Domain 2
Security Operations

Security operations and administration entails the identification of an organization’s information assets and the documentation required for the implementation of policies, standards, procedures, and guidelines that ensure confidentiality, integrity, and availability. Working with management, information owners, custodians, and users, the appropriate data classification scheme is defined for proper handling of both hardcopy and electronic information.

Topics

The following topics are addressed in this chapter:
    • Understand and comply with Codes of Ethics
      • (ISC)2 code of ethics
      • Organizational code of ethics
    • Understand security concepts
      • Confidentiality
      • Integrity
      • Availability
      • Non-repudiation
      • Privacy
      • Least privilege
      • Separation of duties
      • Defense-in-depth
      • Risk-based controls
      • Authorization and accountability
    • Document and operate security controls
      • Deterrent controls
      • Preventative
      • Detective
      • Corrective
    • Participate in asset management
      • Lifecycle
      • Hardware
      • Software
      • Data
    • Implement and assess compliance with controls
      • Technical controls
      • Operational controls
      • Managerial controls (e.g., security policies, baselines, standards, and procedures)
    • Participate in change management duties
      • Implementation and configuration management plan
      • Security impact assessment
      • System architecture/interoperability of systems
      • Testing patches, fixes, and updates
    • Participate in security awareness and training
    • Participate in physical security operations

Objectives

A Systems Security Certified Practitioner (SSCP) is expected to demonstrate knowledge in:

  • Privacy issues
  • Data classification
  • Data integrity
  • Audit
  • Organizational roles and responsibilities
  • Policies
  • Standards
  • Guidelines
  • Procedures
  • Security awareness
  • Configuration control
  • Application of accepted industry practices

The terms security administration and security operations are often used interchangeably by organizations to refer to the set of activities performed by the security practitioner to implement, maintain, and monitor effective safeguards to meet the objectives of an organization’s information security program. In many organizations, security administrators are responsible for configuring, managing, and participating in the design of technical and administrative security controls for one or more operating system platforms or business applications, while security operations personnel primarily focus on configuring and maintaining security-specific systems such as firewalls, intrusion detection and prevention systems, and antivirus software. Placing knowledgeable practitioners in operations and administration roles is critical to security program effectiveness. This chapter focuses on the knowledge and skills needed to become an effective security administrator.

Code of Ethics

All (ISC)2-certified security practitioners must comply with the Code of Ethics, which sets forth standards of conduct and professionalism that characterize dealings with employers, business associates, customers, and the community at large. There are four mandatory tenets of the Code of Ethics:

  1. Protect society, the commonwealth, and the infrastructure.
  2. Act honorably, honestly, justly, responsibly, and legally.
  3. Provide diligent and competent service to principals.
  4. Advance and protect the profession.

Additional guidance for performing your role in a professional manner is also provided alongside the Code of Ethics.

Violations of the (ISC)2 Code of Ethics are a serious matter and may result in disciplinary action pursuant to a fair hearing by the Ethics Committee established by the (ISC)2 Board of Directors. The complete Code of Ethics is available at the (ISC)2 website.1

The following is an excerpt from the (ISC)2 Code of Ethics preamble and canons, by which all (ISC)2 members must abide. Compliance with the preamble and canons is mandatory to maintain membership and credentials. Professionals resolve conflicts between the canons in the order in which the canons are listed. The canons are not equal, and conflicts between them are not intended to create ethical binds.

Code of Ethics Preamble

Safety of the commonwealth, duty to our principals, and to each other requires that we adhere, and be seen to adhere, to the highest ethical standards of behavior.

Therefore, strict adherence to this Code is a condition of certification.

Code of Ethics Canons

  • Protect society, the commonwealth, and the infrastructure.
    • Promote and preserve public trust and confidence in information and systems.
    • Promote the understanding and acceptance of prudent information security measures.
    • Preserve and strengthen the integrity of the public infrastructure.
    • Discourage unsafe practice.
  • Act honorably, honestly, justly, responsibly, and legally.
    • Tell the truth; make all stakeholders aware of your actions on a timely basis.
    • Observe all contracts and agreements, express or implied.
    • Treat all constituents fairly. In resolving conflicts, consider public safety and duties to principals, individuals, and the profession in that order.
    • Give prudent advice; avoid raising unnecessary alarm or giving unwarranted comfort. Take care to be truthful, objective, cautious, and within your competence.
    • When resolving differing laws in different jurisdictions, give preference to the laws of the jurisdiction in which you render your service.
  • Provide diligent and competent service to principals.
    • Preserve the value of their systems, applications, and information.
    • Respect their trust and the privileges that they grant you.
    • Avoid conflicts of interest or the appearance thereof.
    • Render only those services for which you are fully competent and qualified.
  • Advance and protect the profession.
    • Sponsor for professional advancement those best qualified. All other things equal, prefer those who are certified and who adhere to these canons. Avoid professional association with those whose practices or reputation might diminish the profession.
    • Take care not to injure the reputation of other professionals through malice or indifference.
    • Maintain your competence; keep your skills and knowledge current. Give generously of your time and knowledge in training others.

Applying a Code of Ethics to Security Practitioners

In 1998, Michael Davis, a professor of Philosophy at the Illinois Institute of Technology, described a professional code of ethics as being a “contract between professionals.” In this sense, professionals cooperate in serving a unified ideal better than they could if they did not cooperate. Information security professionals serve the ideal of ensuring the integrity, confidentiality, availability, and security of information. A code of ethics for information security professionals should specify how professionals pursue their common ideals so that each may best ensure information’s security, confidentiality, integrity, and availability.

The code of ethics sets expectations for every single information security professional. Other information security professionals and even members of other professions are likely to judge your actions and behavior in relationship to this code of ethics. Beyond that, every individual is more than just a member of a profession and has responsibilities that extend beyond their code of ethics. Each person ultimately is answerable not only to their own conscience, but also to the perceptions, criticism, and legal ramifications of other professionals and society. As information security professionals perform their duties in a variety of unique environments and circumstances, it is important that they balance the code of ethics with legal and regulatory responsibilities.

Donn B. Parker, an information security researcher and a 2001 Fellow of the Association for Computing Machinery, identified five ethical principles that apply to processing information. These principles and how they might be applied are described in the following list:

  1. Informed consent. When contemplating any action, be sure to communicate it clearly and honestly to the people who will be affected by it. For example, if an employee wants a member of another team to collaborate with them on a project, the managers of both teams should be consulted to be sure that there aren’t any conflicts with overall workload, divisions of departmental responsibilities, or breaches of department-based security clearance.
  2. Higher ethic in the worst case. When considering your available courses of action, choose the actions that will cause no harm or as little harm as possible even in the worst circumstances. For instance, if a manager suspects that an employee might be involved in illegal or inappropriate activity in the workplace, the manager might choose to check with the legal department to determine whether it is legally permissible to monitor the employee’s email, which could otherwise violate the employee’s privacy and rights.
  3. Change of scale test. Consider whether an action that seems harmless when performed once or by a single individual would be more harmful if you repeated it or if many others engaged in the same activity. Examples include checking personal email on company time, surfing the Internet, and using company software for personal purposes. One instance seems innocent, but when these actions are repeated by one person or multiplied across many people, the result can be a loss of productivity (and profitability), violation of license agreements, and violation of employment contracts.
  4. Owners’ conservation of ownership. When you own or are otherwise responsible for information, take reasonable steps to secure it and be sure to clearly communicate ownership and rights to users. For example, a company that has a public web site and a corporate network should take adequate measures to protect its customers’ and employees’ passwords, Social Security numbers, and other personal information.
  5. Users’ conservation of ownership. When you use information, you should always assume that someone owns it and protect their interests. For instance, an employee shouldn’t take software that is licensed to their employer and distribute illegal copies to their friends and family.2

Security Program Objectives: The C-I-A Triad and Beyond

The essential mission of any information security program is to protect the confidentiality, integrity, and availability of an organization’s information systems assets. Effective security controls, whether they are physical, technical (logical), or administrative, are designed and operated to meet one or more of these three requirements.

Confidentiality

Confidentiality refers to the property of information whereby it is made available only to those who have a legitimate need to know. Those with a need to know may be employees, contractors and business partners, customers, or the public. Information may be grouped into a logical series of hierarchical “classes” based on the attributes of the information itself; the parties authorized to access, reproduce, or disclose the information; and the potential consequences of unauthorized access or disclosure. The level of confidentiality may also be dictated by an organization’s conduct and operating principles, its need for secrecy, its unique operating requirements, and its contractual obligations. Each level of confidentiality is associated with a particular protection class; that is, differing levels of confidentiality require different levels of protection from unauthorized or unintended disclosure. In some cases, the required level of protection—and thus the protection class or confidentiality level—is specified by laws and regulations governing the organization’s conduct.

It is important to distinguish between confidentiality and privacy. Many states have privacy laws that dictate how and for what purpose personal, nonpublic information may be accessed. However, privacy also refers to an individual’s ownership of his or her information, and includes not only the need to maintain confidentiality on a strict “need to know” basis, but also the individual’s right to exercise discretionary control over how his or her information is collected, the accuracy of the information, and how, by whom, and for what purpose the information is used.

Authorization, identity and access management, and encryption and disclosure controls are some methods for maintaining an appropriate level of confidentiality. Detective controls such as Data Leakage Prevention (DLP) tools may be used to monitor when, how, and by whom information is accessed, copied, or transmitted.

The consequences of a breach in confidentiality may include legal and regulatory fines and sanctions, loss of customer and investor confidence, loss of competitive advantage, and civil litigation. These consequences can have a damaging effect on the reputation and economic stability of an organization. When certain types of an individual consumer’s information such as personally identifying data, health records, or financial information are disclosed to unauthorized parties, consequences such as identity and monetary theft, fraud, extortion, and personal injury may result. Information central to the protection of government interests may have serious public safety and national security consequences if disclosed.

Confidentiality supports the principle of least privilege by providing that only authorized individuals, processes, or systems should have access to information on a need-to-know basis. The level of access that an authorized individual should have is the level necessary for them to do their job. In recent years, much press has been dedicated to the privacy of information and the need to protect it from individuals who may be able to commit crimes by viewing the information. Identity theft is the act of assuming another person’s identity through knowledge of confidential information obtained from various sources.

An important measure to ensure confidentiality of information is data classification. This helps to determine who should have access to the information (public, internal use only, or confidential). Identification, authentication, and authorization through access controls are practices that support maintaining the confidentiality of information. A sample control for protecting confidentiality is to encrypt information. Encryption of information limits the usability of the information in the event it is accessible to an unauthorized person.
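
As an illustration of how encryption limits the usability of exposed data, the following minimal sketch (assuming the third-party Python cryptography package is available) encrypts a record with a symmetric key; without that key, an unauthorized party who obtains the ciphertext cannot recover the plaintext.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # distributed only to authorized parties
    cipher = Fernet(key)

    ciphertext = cipher.encrypt(b"SSN: 078-05-1120")   # unreadable without the key
    print(cipher.decrypt(ciphertext))                  # b'SSN: 078-05-1120'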

Integrity

Integrity is the property of information whereby it is recorded, used, and maintained in a way that ensures its completeness, accuracy, internal consistency, and usefulness for a stated purpose. Systems integrity, on the other hand, refers to the maintenance of a known good configuration and expected operational function. In both cases, the key to ensuring integrity is knowledge of state: specifically, the ability to document and understand the state of data or a system at a certain point in time, creating a baseline. Going forward from that baseline, the integrity of the data or the system can always be ascertained by comparing the baseline to the current state. If the two match, the integrity of the data or the system is intact; if they do not match, the integrity of the data or the system has been compromised. Integrity is a key factor in the reliability of information and systems.

Integrity controls include system edits and data validation routines invoked during data entry and update; system, file, and data access permissions; change and commitment control procedures; and secure hashing algorithms. Detective controls include system and application audit trails, balancing reports and procedures, antivirus software, and file integrity checkers.
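
A simple way to see the baseline comparison described above is a file integrity check built on a secure hash: record a digest for each file in a known good state, then recompute and compare later. The sketch below uses Python’s standard hashlib module; the file path is illustrative only.

    import hashlib

    def sha256_of(path):
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Baseline captured while the system is in a known good state.
    baseline = {"/etc/hosts": sha256_of("/etc/hosts")}

    # Later comparison: a mismatch means the file's integrity has been compromised.
    for path, known in baseline.items():
        print(path, "intact" if sha256_of(path) == known else "MODIFIED")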

The need to safeguard information and system integrity may be dictated by laws and regulations, such as the Sarbanes–Oxley Act of 2002, which mandates certain controls over the integrity of financial reporting. More often, it is dictated by the needs of the organization to access and use reliable, accurate information. Integrity controls such as digital signatures used to guarantee the authenticity of messages, documents, and transactions play an important role in non-repudiation (in which a sending or signing party cannot deny their action) and verifying receipt of messages. Finally, the integrity of system logs and audit trails and other types of forensic data is essential to the legal interests of an organization.

Consequences of integrity failure include an inability to read or access critical files, errors and failures in information processing, calculation errors, and uninformed decision making by business leaders. Integrity failures may also result in inaccuracies in reporting, resulting in the levying of fines and sanctions, and in inadmissibility of evidence when making certain legal claims or prosecuting crime.

Availability

Availability refers to the ability to access and use information systems when and as needed to support an organization’s operations. Systems availability requirements are often defined in service level agreements (SLAs), which specify percentage of uptime as well as support procedures and communication for planned outages. In disaster recovery planning, system recovery time objectives (RTOs) specify the acceptable duration of an unplanned outage due to catastrophic system non-availability. When designing safeguards, security practitioners must balance security requirements with the need for availability of infrastructure services and business applications.
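
To make the uptime percentages in an SLA concrete, the short calculation below converts a commitment such as 99.9% into the unplanned downtime it allows per 30-day period; the figures are illustrative only.

    def allowed_downtime_minutes(sla_percent, period_days=30):
        # Unplanned downtime that can occur while still meeting the uptime commitment.
        total_minutes = period_days * 24 * 60
        return total_minutes * (1 - sla_percent / 100)

    print(allowed_downtime_minutes(99.9))    # about 43.2 minutes per 30-day month
    print(allowed_downtime_minutes(99.99))   # about 4.3 minutes per 30-day month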

Availability controls include hardware and software RAID (redundant array of independent disks) controllers, UPS (uninterruptible power supply), backup and recovery software and procedures, mirroring and journaling, load balancing and failover, and business continuity plans.

Consequences of availability failures include interruption in services and revenue streams, fines and sanctions for failure to provide timely information to regulatory bodies or those to whom an organization is obliged under contract, and errors in transaction processing and decision making.

Non-Repudiation

Non-repudiation is a service that ensures the sender cannot deny a message was sent and the integrity of the message is intact. NIST's SP 800-57 defines non-repudiation as:

A service that is used to provide assurance of the integrity and origin of data in such a way that the integrity and origin can be verified by a third party as having originated from a specific entity in possession of the private key of the claimed signatory. In a general information security context, assurance that the sender of information is provided with proof of delivery and the recipient is provided with proof of the sender’s identity, so neither can later deny having processed the information.3

Non-repudiation can be accomplished with digital signatures and PKI. The message is signed using the sender’s private key. When the recipient receives the message, they may use the sender’s public key to validate the signature. While this proves the integrity of the message, it does not by itself establish who controls the private key. A certificate authority must attest to the binding between the key pair and the sender’s identity (and the private key must be held only by the sender) for the non-repudiation to be valid.
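
The following sketch, which assumes the third-party Python cryptography package, shows the sign-and-verify flow described above: the sender signs with a private key, and anyone holding the corresponding public key can verify the message’s origin and integrity. Binding that public key to the sender’s identity remains the job of a certificate authority.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Sender's key pair; in practice the public key is bound to the sender
    # by a CA-issued certificate.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"Approve purchase order 4471"
    signature = private_key.sign(
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

    # Verification raises InvalidSignature if the message or signature was altered.
    public_key.verify(
        signature,
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )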

Privacy

Privacy can be defined as “the rights and obligations of individuals and organizations with respect to the collection, use, retention, and disclosure of personal information.”4 Personal information is a rather generic concept and encompasses any information that is about or on an identifiable individual. Although international privacy laws are somewhat different in respect to their specific requirements, they all tend to be based on core principles or guidelines. The Organization for Economic Cooperation and Development (OECD) has broadly classified these principles as collection limitation, data quality, purpose specification, use limitation, security safeguards, openness, individual participation, and accountability. The guidelines cover the following:

  • The collection of personal data should be limited, obtained by lawful and fair means, and done with the knowledge of the data subject.
  • The collection of personal data should be done relevant to the specific purposes for which it is to be used and should be accurate, complete, and up to date.
  • The purposes of personal data collection should be specified no later than the time of data collection. Subsequent use of the personal data should be limited to the fulfillment of the stated purposes or for compatible purposes that are specified on each occasion of change of the purpose.
  • Personal data should not be used, disclosed, or made available for purposes other than those specified above except by the authority of law or with the consent of the data subject.
  • Reasonable security safeguards should be in place against risks such as unauthorized access, loss, destruction, misuse, modification, or disclosure of data.5

A general policy of openness about developments, practices, and policies concerning personal data should be in place. There should also be a means for establishing the existence and nature of personal data, the main purposes of its use, and the identity and location of the data controller. A person should have the following rights:

  • To obtain confirmation of whether the data controller or similar party has data relating to him
  • To have communicated to him the data that relates to him in a reasonable time, at a reasonable charge, in a reasonable manner, and in an intelligible form
  • To be given valid reasons if his request for his data is denied
  • To be able to challenge a denial of his request for his data
  • To be able to challenge data relating to him and, if the challenge is successful, to have the data erased, rectified, completed, or amended.

A data controller should be accountable for complying with measures that establish and enforce these principles.6

In most industries internationally there is a consensus that these principles should form the minimum set of requirements for the development of reasonable legislation, regulations, and policy, and that nothing prevents organizations from adding additional principles. However, the actual application of these principles has proved more difficult and costly in almost all circumstances; there has been a vast underestimation of the impact of the various privacy laws and policies both domestically and with cross-border commerce. This is not an excuse to abandon, block, or fail to comply with applicable laws, regulations, or policies. However, information security practitioners need to appreciate that business practices have changed due to the need to be in compliance (often with international regulations), and that budgets must be appropriately increased to meet the demand. Like it or not, the privacy genie is out of the bottle and there is no putting it back.

Security Best Practices

When designing and implementing a security program, the security practitioner seeks to combine the needs of the organization with industry best practices. Best practices are defined as processes and methods that have been proven by thorough testing and real-world experience to consistently lead to desired results. A best practice may set the standard for performing a particular process such as managing system access or configuring a specific type of security device, or it may be broader in scope, covering one or more aspects of a security program such as risk management or personnel security. Security practitioners should refer to best practices where available to make use of the industry knowledge and experience that has gone into their creation and avoid reinventing the wheel. Be mindful, however, that citing best practices is rarely, in itself, a sufficient argument for adopting a particular strategy for the organization. The technologies and practices that the security practitioner implements should first and foremost address the specific risks, objectives, and culture of the organization. Many best practices documents are designed with sufficient flexibility to allow a security practitioner to readily adapt their principles into the specific set of practices that best meet the unique needs of the organization.

Designing a Security Architecture

Security architecture is the practice of designing a framework for the structure and function of information security systems and practices in the organization. When developing security architecture—whether at the enterprise, business unit, or system level—security best practices should be referenced for guidance when setting design objectives. Essential best practice considerations include:

  • Defense-in-depth
  • Risk-based controls
  • Least privilege
  • Authorization and accountability
  • Separation of duties

Defense-in-Depth

There is no such thing as perfect security. Preventive measures designed to safeguard an organization’s assets can and do fail due to the presence of unknown vulnerabilities, hardware or software failures, human error, weaknesses in dependent processes, and the efforts of external attackers and malicious insiders. Reliance on a single safeguard to protect any critical asset is an invitation to a security breach. Security practitioners understand this and avoid single points of failure by designing safeguards using a layered approach.

Designing for defense-in-depth requires an understanding of the specific threats to the target asset and the anatomy of potential attacks or attack vectors; that is, the specific means by which a particular attack can occur. Defenses may be designed to prevent or deter attack using an outside-in or inside-out approach. By placing safeguards at two or more points along the access path to the asset, failure of one safeguard can be counteracted by the function of another safeguard further along the access path. For example, a firewall protecting an organization’s web server may be designed to only allow web browsing (HTTP or HTTPS) to the server from the external network. An attacker may circumvent the firewall policy by following an indirect path (for example, accessing the web server via a compromised host or user account with access to the server), by exploiting vulnerabilities in the firewall itself, or by using the allowed ports and protocols for purposes other than those for which they were intended. A defense-in-depth strategy might go beyond perimeter defenses, adding safeguards to the web server and hosted web applications, for example by disabling unnecessary services such as FTP (file transfer protocol), TELNET (terminal emulation), and remote procedure calls; requiring use of a unique identifier and strong authentication method to gain access to services; implementing protections against brute-force password attacks; and so on. If the actual target lies downstream from the interface, further protection along the access path is advisable. For example, if the web server uses a database to store data, the database can be protected by using stored procedures, using strong input validation, requiring additional authentication for people and applications, or installing a host-based intrusion prevention system on the database server. It is important to note that a true defense-in-depth strategy requires that safeguards not share a common mechanism or be dependent on one another for proper operation. This is because failure of a common mechanism causes failure of all safeguards that rely on that mechanism.
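
As one small example of adding a safeguard downstream of the perimeter, the sketch below (a simplified illustration using Python’s standard sqlite3 module and a hypothetical table) layers strict input validation over a parameterized query, so malformed input is rejected before it ever reaches the database.

    import re
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, email TEXT)")

    def lookup_user(username):
        # Layer 1: whitelist validation rejects unexpected characters outright.
        if not re.fullmatch(r"[A-Za-z0-9_.-]{1,32}", username):
            raise ValueError("invalid username")
        # Layer 2: a parameterized query ensures the input is treated as data, not SQL.
        return conn.execute(
            "SELECT username, email FROM users WHERE username = ?", (username,)
        ).fetchall()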

Network segmentation is also an effective way to achieve defense-in-depth for distributed or multi-tiered applications. The use of a demilitarized zone (DMZ), for example, is a common practice in security architecture. Host systems that are accessible through the firewall are physically separated from the internal network by means of secured switches, or by using an additional firewall (or multi-homed firewall) to control traffic between the web server and the internal network. Application DMZs are more frequently used today to limit access to application servers to those networks or systems that have a legitimate need to connect. The security practitioner should examine Figure 2-1 to see a logical design for network segmentation and the use of a DMZ.

Figure 2-1: Defense-in-depth through network segmentation

Although preventive controls are usually the first and primary design elements in a security architecture, no preventive mechanism is 100% foolproof. Furthermore, not all attacks can be prevented even by layering preventive safeguards along the access path to an asset without interfering with legitimate activity. For that reason, defense-in-depth design also includes detective and corrective controls along the attack path. Detective controls are designed to inform security practitioners when a preventive control fails or is bypassed. Activity logs, audit trails, accounting and balancing procedures, and intrusion detection systems (IDSes) are typical detective controls. Intrusion detection systems, which operate in real-time or near real-time, are the best choice for critical assets. Signature-based intrusion detection systems are designed to flag activity that the security practitioner has identified as suspicious or malicious. Such systems are useful for known attack scenarios. However, the so-called zero-day attacks for which no signature is yet available can evade these systems. Anomaly-based IDS have the advantage of identifying new attacks, but they must be constantly tuned as new applications or functions are introduced to avoid alarming on legitimate activity.
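
A toy illustration of the signature-based approach: flag any log entry that matches a pattern the practitioner has already identified as suspicious. The patterns below are purely illustrative, and the approach would miss anything without a known signature, which is exactly the zero-day limitation noted above.

    import re

    # Hypothetical signature set maintained by the security practitioner.
    SIGNATURES = {
        "sql_injection":  re.compile(r"union\s+select", re.IGNORECASE),
        "path_traversal": re.compile(r"\.\./"),
    }

    def match_signatures(log_line):
        return [name for name, pattern in SIGNATURES.items() if pattern.search(log_line)]

    print(match_signatures("GET /item.php?id=1 UNION SELECT password FROM users"))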

Finally, corrective controls seek to minimize the extent or impact of damage from an attack and return compromised systems and data to a known good state. Furthermore, they seek to prevent similar attacks in the future. Corrective controls are usually manual in nature, but recent advances in intrusion detection and prevention system (IDPS) technology have allowed security practitioners to place these systems in-line along the access path, where they can automatically close ports, correct vulnerabilities, restore previous configurations, and redirect traffic in response to a detected intrusion. Caution must be taken when implementing these systems to avoid interfering with legitimate activity, particularly when detective controls are set to automatically trigger corrective action.

Risk-Based Controls

Security has traditionally been considered “overhead” in many organizations, but this attitude is changing as more security practitioners enter the field armed with an understanding of business practices and the concept of risk-based security controls. All organizations face some degree of risk. Information security risk can be thought of as the likelihood of loss due to threats exploiting vulnerabilities; that is:

RISK = THREAT + VULNERABILITY + IMPACT

The degree of risk tells the organization what losses can be expected if security controls are absent or ineffective. The consequences or impact to assets may be tangible, as when computer equipment is lost or stolen, operations are interrupted, or fraudulent activity occurs. They may also be intangible, such as damage to an organization’s reputation, decreased motivation of staff, or loss of customer and investor confidence. A “reasonable” expectation of loss may or may not be a result of a known probability or frequency of occurrence; for critical assets, large losses could result from a single security incident and, therefore, the risk may be high even if the probability of an incident occurring is low. Conversely, highly probable events that incur minimal losses may be considered acceptable as a cost of doing business, depending on the organization’s risk tolerance, or risk appetite.

The concept of risk-based controls states that the total costs to implement and maintain a security measure should be commensurate with the degree to which risks to the confidentiality, integrity, and availability of the assets protected by the security measure must be reduced to acceptable levels. Safeguards that address multiple risks can and should be implemented to provide economies of scale wherever possible, as long as they are a part of an overall defense-in-depth strategy. The cost of safeguards includes not only capital expenses for software and equipment but also the use of resources to implement and maintain the safeguard and the impact, if any, on current business processes and productivity levels. An objective presentation of risk, including the likelihood and anticipated impact of adverse events, will help the security practitioner gain needed support from financial decision makers and line staff. Risk treatment decisions—that is, whether and to what extent to transfer, mitigate, or accept a certain level of risk—are management decisions that should be founded in an objective view of the facts. Similarly, the prioritization and selection of safeguards is guided by the extent and nature of the risks uncovered in the risk assessment.
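
One way to express the commensurate-cost idea is a simple cost/benefit comparison: a safeguard is economically justified when what it costs to run does not exceed the loss exposure it removes. The sketch below is a simplified illustration with hypothetical figures, not a complete risk methodology.

    def safeguard_justified(expected_annual_loss_before,
                            expected_annual_loss_after,
                            annual_safeguard_cost):
        # The safeguard's total cost should be commensurate with the risk reduction it delivers.
        risk_reduction = expected_annual_loss_before - expected_annual_loss_after
        return annual_safeguard_cost <= risk_reduction

    # Hypothetical numbers: a $40,000/year control that cuts expected losses
    # from $120,000 to $30,000.
    print(safeguard_justified(120_000, 30_000, 40_000))   # True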

Using a standard process for assessing and documenting risk provides consistent and repeatable results that can be readily compared, trended, and understood by decision makers. Methodologies such as the Carnegie Mellon Software Engineering Institute’s OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation) and COBRA (Consultative, Objective and Bi-Functional Risk Analysis), and guidance from industry best practices such as National Institute of Standards and Technology (NIST) Special Publication 800-30 Revision 1, “Guide for Conducting Risk Assessments,” enhance the credibility of results and promote more efficient and effective use of time and resources.7

A risk assessment may be qualitative or quantitative in nature; whenever possible, use quantitative data to document incident probability and impact. Data for the risk assessment may be based on internal events and historical data, surveys, interviews and questionnaires, and industry experience available through various publications and industry forums. The use of metrics and cost/benefit analyses are key success factors in gaining the organization’s buy-in for security measures. Transparency of process, open communication, and a willingness to include nontechnical management and line staff as participants in the risk assessment process make the risk assessment a collaborative effort and promote effective adoption of risk treatment recommendations.

Least Privilege

The least privilege concept is the analog of “need to know.” Under least privilege, access rights and permissions are granted based on the need of a user or process to access and use information and resources. Only those rights and privileges needed to perform a specific function are granted. Eliminating unnecessary privileges reduces the potential for errors committed by users who may not have the knowledge or skills necessary to perform certain functions, and protects against random errors such as unintentional deletion of files. Limiting the number of privileged users on critical systems and auditing the activities of those who have a high privilege level also reduces the likelihood of authorized users performing unauthorized functions. On the desktop, least privilege or least user access (LUA) is often implemented to prevent casual users from installing software, modifying system settings, or falling prey to malicious code operating in the context of the logged-in user. Some organizations assign administrators two logins, one with administrative privileges and one with ordinary user privileges, to reduce the impact of mistakes when performing routine activities that do not require administrative authority. As an alternative, some systems provide temporary augmentation of privileges under “run as” or privilege adoption schemes in which additional privileges are granted for a specific task or session, then removed when the task is complete.

Least privilege can be implemented at the operating system, application, process, file, data element, or physical security layers. Unfortunately, many COTS (commercial, off-the-shelf) applications are developed in environments that have not adopted least privilege principles and, as a result, these products often require elevated privilege to run. For desktop applications, the use of Microsoft’s Process Monitor and similar tools can identify system files, registry keys, and other protected resources accessed by the application so that policy configuration can be modified to provide specific permissions as needed. (See Figure 2-2.)

Figure 2-2: Microsoft/Sysinternals Process Monitor

However, this is time consuming and only useful in certain operating environments. When a full implementation of least privilege is not feasible or possible, adopting a defense-in-depth strategy using such things as audit logs, event monitoring, and periodic audits can be used as a compensating control strategy.

In practice, privileges are typically set by associating specific roles or groups with an access control entry. Maintaining role- or group-based privileges is much more efficient than granting these rights at an individual level, which requires frequent modifications across multiple access entries to accommodate changes in each individual’s status and job function. The groups Everyone, Public, Authenticated Users, and the like, which contain all authorized users of a system, should be associated with access control entries that grant only the minimum privileges needed to authenticate to the system.
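
A minimal sketch of group- or role-based privilege assignment, with hypothetical role and permission names: each role maps to only the permissions needed for that job function, and anything not explicitly granted is denied.

    ROLE_PERMISSIONS = {
        "helpdesk": {"reset_password"},
        "dba":      {"read_db", "backup_db"},
        "auditor":  {"read_logs"},
    }

    def is_authorized(roles, permission):
        # Grant only what is explicitly mapped to the user's roles; the default is deny.
        return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in roles)

    print(is_authorized({"helpdesk"}, "reset_password"))   # True
    print(is_authorized({"helpdesk"}, "read_db"))          # False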

Authorization and Accountability

Access control systems are designed with the assumption that there is an appropriate process in place to authorize individuals to specific access privileges. The decision of which privileges to grant to which individuals or groups should not be made by the security practitioner, but rather by the owners of the data or system to be accessed. A system or data owner is the individual who has the most vested interest in maintaining the confidentiality, integrity, or availability of a particular system or data set and is typically a business line manager or above. A record of authorizations should be kept to support access control system validation testing, in which actual access is compared to authorized access to determine whether the process of assigning access entitlements is working as intended and is aligned with the stated policy. Testing also helps to catch errors in assigning access privileges before a breach can occur. Documented authorizations are also used in forensic work when determining whether an incident occurred at the hands of a legitimate or illegitimate user.

Accountability is a principle that ties authorized users to their actions. Accountability is enforced through assigning individual access accounts and by generating audit trails and activity logs that link identifying information about the actor (person, system, or application) with specific events. Audit data should be protected against unintentional or malicious modification or destruction, as it is an important forensic tool. It should be backed up regularly and retained for a sufficient period to support investigations and reporting. Some regulations require a specific retention period for audit trail data. Individuals should be cautioned never to share their access with others and to protect their credentials from unauthorized use. They should be informed that any information recorded under their unique access accounts will be attributed to them; that is, they will be held accountable for any activity that occurs through use of the access privileges assigned to them.
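
To show how accountability ties an identified actor to an event, the sketch below (using Python’s standard logging module, with an illustrative log file name, field set, and user ID) writes one structured audit record per action.

    import json
    import logging
    from datetime import datetime, timezone

    audit = logging.getLogger("audit")
    audit.setLevel(logging.INFO)
    audit.addHandler(logging.FileHandler("audit.log"))

    def record_event(actor, action, target):
        # Each entry links the authenticated actor to a specific action and time.
        audit.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "target": target,
        }))

    record_event("jsmith", "READ", "payroll_2024.xlsx")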

Separation of Duties

Separation of duties is an operational security mechanism for preventing fraud and unauthorized use that requires two or more individuals to complete a task or perform a specific function. (Note: Separation of duties does not necessarily require two people to perform a task, but requires that the person performing is not the person checking on the task.) Separation of duties is a key concept of internal control and is commonly seen in financial applications that assign separate individuals to the functions of approving, performing, and auditing or balancing a transaction. This ensures that no single person operating alone can perform a fraudulent act without detection. Most COTS financial software packages have built-in mechanisms for enforcing appropriate separation of duties, using transaction segmentation and role-based access control. In nonfinancial systems, separation of duties may be implemented in any system subject to abuse or critical error to reduce the impact of a single person’s actions. For example, most program change control processes separate development, testing, quality assurance (QA), and production release functions.
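
In application code, separation of duties often reduces to a simple check that the person performing a task is not the person approving it. The sketch below is a hypothetical illustration of that rule applied to a change-release step.

    def release_change(change):
        # Reject any change where the approver is also the person who made it.
        if change["approved_by"] == change["developed_by"]:
            raise PermissionError("separation of duties violation: developer cannot approve own change")
        print("releasing change", change["id"])

    release_change({"id": 1042, "developed_by": "alice", "approved_by": "bob"})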

Dual control is similar to a separation of duties in that it requires two or more people operating at the same time to perform a single function. Examples of dual control include use of signature plates for printing, supervisor overrides for certain transactions and adjustments, and some encryption key recovery applications.

Separation of duties does not prevent collusion; that is, cases where two or more persons cooperate to perpetrate a fraudulent act. Careful transaction balancing and review of suspicious activity and output captured in logs, transaction registers, and reports are the best methods of detecting collusion. In some organizations, additional operational security practices such as mandatory vacation periods or job rotations are enforced to provide management with an opportunity to prevent and detect collusion.

Documenting and Operating Security Controls

In addition to the best practices discussed thus far for designing a security architecture, there are several more considerations regarding controls and documentation that should be addressed.

Controls

Controls are safeguards and countermeasures that are implemented to mitigate, lessen, or avoid a risk. Controls are generally grouped into different categories. US NIST uses three classes in its descriptions of controls, based on a control’s function and on definitions from US FIPS 200.8 The classes are:

  • Management—Controls based on the management of risk and the management of information systems security. These are generally policies and procedures.
  • Technical—Controls that are primarily implemented and executed through mechanisms contained in the hardware, software, and firmware of the components of the system.
  • Operational—Controls that are primarily implemented and executed by people (as opposed to systems).

In addition to these classes, US NIST has defined 18 control families based on the minimum security requirements defined in US FIPS 200. Table 2-1 illustrates their relationship to the classes.9

Table 2-1: Security Control Classes, Families, and Identifiers

Identifier   Family                                   Class
AC           Access Control                           Technical
AT           Awareness and Training                   Operational
AU           Audit and Accountability                 Technical
CA           Security Assessment and Authorization    Management
CM           Configuration Management                 Operational
CP           Contingency Planning                     Operational
IA           Identification and Authentication        Technical
IR           Incident Response                        Operational
MA           Maintenance                              Operational
MP           Media Protection                         Operational
PE           Physical and Environmental Protection    Operational
PL           Planning                                 Management
PM           Program Management                       Management
PS           Personnel Security                       Operational
RA           Risk Assessment                          Management
SA           System and Services Acquisition          Management
SC           System and Communications Protection     Technical
SI           System and Information Integrity         Operational

Other organizations within the industry may have different terms or a different number of categories depending on their definitions and how they choose to delineate their categories. For example, some relate control categories to the time line of a security incident as illustrated in Figure 2-3.

  • Directive—Controls designed to specify acceptable rules of behavior within an organization.
  • Deterrent—Controls designed to discourage people from violating security directives.
  • Preventive—Controls implemented to prevent a security incident or information breach.
  • Compensating—Controls implemented to substitute for the loss of primary controls and mitigate risk down to an acceptable level.
  • Detective—Controls designed to signal a warning when a security control has been breached.
  • Corrective—Controls implemented to remedy circumstance, mitigate damage, or restore controls.
  • Recovery—Controls implemented to restore conditions to normal after a security incident.

Figure 2-3: Continuum of controls relative to the time line of a security incident

In either example, controls still serve as safeguards and countermeasures to address risk, and different labels can be used in conjunction with each other. An example is illustrated in Table 2-2.

Table 2-2: Control Example for Types and Categories

              Administrative                Technical                                   Physical
Directive     Policy                        Configuration standards                     “Authorized Personnel Only” signs; traffic lights
Deterrent     Policy                        Warning banner                              “Beware of Dog” sign
Preventative  User registration procedure   Password-based login                        Fence
Detective     Review of violation reports   Logs                                        Sentry; CCTV
Corrective    Termination                   Unplug, isolate, and terminate connection   Fire extinguisher
Recovery      Disaster recovery (DR) plan   Backups                                     Rebuild
Compensating  Supervision; job rotation     Logging; CCTV; keystroke logging            Layered defense

Compensating controls were mentioned in one of the examples and should be discussed further because of their importance within the risk management process. Compensating controls are introduced when the existing capabilities of a system do not support the requirements of a policy. Compensating controls can be technical, procedural, or managerial. Although an existing system may not support the required controls, there may exist other technology or processes that can supplement the existing environment, closing the gap in controls, meeting policy requirements, and reducing overall risk. For example, the access control policy may state that the authentication process must be encrypted when performed over the Internet. Adjusting an application to natively support encryption for authentication purposes may be too costly. Secure Sockets Layer (SSL), an encryption protocol (now succeeded by Transport Layer Security, TLS), can be employed and layered on top of the authentication process to support the policy statement; a brief sketch of this approach follows the list below. In addition, management processes, such as authorization, supervision, and administration, can be used to compensate for gaps in the control environment. The critical points to consider when addressing compensating controls are:

  • Do not compromise stated policy requirements.
  • Ensure that the compensating controls do not adversely affect risk or increase exposure to threats.
  • Manage all compensating controls in accordance with established practices and policies.
  • Compensating controls designated as temporary should be removed after they have served their purpose and another, more permanent control should be established.
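
The sketch referenced above illustrates the SSL/TLS example: wrapping an existing client connection in TLS (using Python’s standard ssl module, with a hypothetical host name and legacy login exchange) so credentials are encrypted in transit even though the application itself cannot encrypt them natively.

    import socket
    import ssl

    # Compensating control: layer TLS over a legacy, cleartext authentication exchange.
    context = ssl.create_default_context()

    with socket.create_connection(("app.example.com", 8443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="app.example.com") as tls_sock:
            tls_sock.sendall(b"LOGIN alice s3cret\n")   # hypothetical legacy protocol
            print(tls_sock.recv(1024))
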
System Security Plans

A system security plan is a comprehensive document that details the security requirements for a specific system, the controls established to meet those requirements, and the responsibilities and expected behaviors of those administering and accessing the system. It is developed with input from system and information owners, individuals with responsibility for the operation of the system, and the system security officer. The system security plan and its supporting documents are living documents that are periodically reviewed and updated to reflect changes in security requirements and in the design, function, and operation of the system. Security plans are developed and reviewed before system certification and accreditation; during these later stages of preproduction review, system security plans are analyzed, updated, and finally accepted.

Roles and responsibilities in the system security planning process include:

  • System Owner—Responsible for decisions regarding system procurement or development, implementation and integration, and operation and ongoing maintenance. The system owner has overall responsibility for system security plan development in collaboration with the information owner, system security officer, and users of the system, and for maintaining current plans. In addition, he/she is responsible for ensuring that the controls specified in the plan are operating as intended.
  • Information Owner—Has overall authority for the information stored, processed, or transmitted by the system. The information owner is responsible for specifying policies for appropriate use of information, and security requirements for protecting information in the system. He/she determines who can access the system and the privileges that will be granted to system users.
  • Security Officer—Responsible for coordinating development, review, and acceptance of security plans and for identification, implementation, administration, and assessment of security controls.
  • Authorizing Official or Approver—A senior executive or manager with the authority to assume full responsibility for the system covered in the system security plan. This is the person empowered to authorize operation of the system, and accept any residual risk that remains after security controls have been implemented.

The system security plan scope is determined by performing a boundary analysis to identify the information, hardware, system and application software, facilities, and personnel included in the overall system’s function. Typically, the resources constituting a system are under the same management control and work together to perform a discrete function or set of functions. If scoping is difficult, it may be helpful to start with the specific type or set of information assets in question, and define the hardware, software, and personnel involved in its storage, processing, and use. Larger systems may be subdivided into subsystems that perform one or more specific functions. The controls documented in the system security plan should consist of administrative, technical, and physical mechanisms to achieve the desired level of confidentiality, integrity, and availability of system assets.

The system security plan should include the following information:

  • System Name and a Unique ID—An ID is assigned to the system to aid in inventory, measurement, and configuration management.
  • System Categorization—The system should be categorized low, medium, or high (or some other relative ranking) for each element of the C-I-A triad.
  • System Owner—Identify the name, title, and organization of the system owner.
  • Authorizing Official—The name, title, and organization of the authorizing official.
  • Technical and Administrative Contacts—Contact information for personnel who have knowledge of the configuration and operation of the system.
  • Security Requirements—Security requirements for confidentiality, integrity, and availability of system resources.
  • Security Controls—Administrative, physical, and technical controls applied to meet security requirements.
  • Review and Maintenance Procedures—Roles, responsibilities, and procedures for reviewing and maintaining the system security plan.

Additional Documentation

In addition to the documentation described above, the security practitioner maintains associated documentation such as security recommendations, technical specifications, design documents, and implementation checklists to support administration and troubleshooting. General information documents may be created to provide an overview of specific security systems for new employees, or for those performing backup functions.

Security recommendations are developed to address specific issues that are identified as a result of a risk, vulnerability, or threat assessment. Recommendations may be presented as a section in a formal risk assessment report; may be presented to management as follow-up to an incident report; or may be published as guidelines for developers, administrators, or users of a system. Recommendations are not mandatory actions but instead are suggested steps that, if taken, can achieve a specific security outcome.

Disaster recovery documentation is created and maintained for any critical system that must be restored onsite or offsite in the event of system interruption. System restoration does not always follow the same steps as building a new system; for example, policy and other configuration files and application data may need to be backed up and restored to bring systems back to the state they were in when the interruption occurred. A copy of disaster recovery documentation should be maintained at a sufficient distance from the organization’s offices to support restoration activities in the event of local or regionalized disaster.

Another type of documentation maintained by the security practitioner is the collection of audit and event logs, incident data, and other information captured during the course of operating the organization’s security systems. These data are necessary for system tuning, troubleshooting and problem resolution, forensics investigations, reporting, and validation of security controls. They may also be used to validate compliance with applicable policies, procedures, and regulations. Audit and event data should be protected from unauthorized access and modification, and retained for a predetermined period. Most security systems are self-documenting, although you may be required to document procedures for collecting, reviewing, and securing the data.

Secure Development and Acquisition Lifecycles

Software applications have become targets of an increasing number of malicious attacks in recent years. Web-based applications that are exposed over public networks are a natural choice for criminal hackers seeking entry points to an organization’s data and internal network infrastructure. Internal applications are also at risk due to internal fraud and abuse, logical processing errors, and simple human mistakes. While firewalls and other network security devices offer a great degree of protection, they are often bypassed by legitimate users, attackers using stolen login credentials or hijacked user sessions, and unauthorized activity conducted over allowed ports and protocols. Indirect access through a remote access gateway or compromised internal host can also bypass perimeter protections. Safeguards built into applications early in systems design are needed to counteract these threats, thwart application-level attacks, and maintain data and system integrity in the face of human error.

The security practitioner will be better able to build security into his or her organization’s application systems by actively participating through all phases of the development and acquisition process. Rather than becoming an impediment to productivity, secure design and development practices introduce efficiencies and enhance quality if they are applied consistently throughout the development life cycle. This process of building security in requires an understanding of commonly used application software development methods, common threats and vulnerabilities, and application-level safeguards.

Most organizations adhere to a standard methodology to specify, design, develop, and implement software applications. Various models exist for developing software applications. Some of the more prevalent are described below.

The Waterfall Model

The waterfall model10 consists of a linear sequence of six steps. Steps are taken in order, and as each step in the process is completed, the development team moves on to the next step. Steps in the waterfall model are:

  1. Requirement Gathering and Analysis—Various stakeholders are consulted to identify the system requirements, and the results are compiled in a requirement specification document.
  2. System Design—The requirement specifications document is consulted to determine the architecture, software requirements, hardware requirements, and other aspects of system design.
  3. Implementation—The system is now developed in small programs that are often referred to as units. Each unit is developed, tested, and modified until it works well.
  4. Integration—All the units are integrated into a system. The entire system is tested for any faults and failures.
  5. Deployment—The product and its related instructions are officially released into the corporate, government, or consumer market.
  6. Maintenance—The existing product is maintained, patches for technical problems are released, functionality is updated, minor upgrades are released, and so on.

In addition to these phases, testing takes place as necessary in the process, often occurring during implementation, after integration, after deployment, and sometimes as part of maintenance.

Let's delve a little deeper into each phase.

Requirements Gathering and Analysis

A feasibility analysis usually precedes approval of any development project. In this stage, the business problem and a recommended approach are documented in the project charter. The project charter also includes a preliminary plan, resource estimates and budget, and constraints. The person designated as the project sponsor typically signs off on the charter, giving the go-ahead to proceed. Additional stakeholders may be named to participate in review and approval at key milestone points. A security practitioner who is fully engaged in the development process will ideally be asked to participate in charter development and review.

Functional and nonfunctional requirements are documented in this phase. Functional requirements specify user interactions and system processing steps, and are often documented in the form of sequences of action called use cases, documented using Unified Modeling Language (UML). Nonfunctional requirements, such as those for performance and quality, or those imposed by design or environmental constraints, are documented using narratives and diagrams. Security requirements may be incorporated within the nonfunctional requirements specification. Examples of this type of security requirement include user and process authentication, maintaining access logs and audit trails, secure session management, and encryption of passwords and sensitive data. Functional security requirements, such as how an application responds to incorrect passwords, malformed input, or unauthorized access attempts, can be documented in the form of “abuse” or “misuse” case diagrams. Security requirements are typically derived through a risk or threat assessment conducted during this stage of development.

The project sponsor and stakeholders sign off on the completed requirements before the team begins solution development.

System Design

Software design activities are typically performed by architects or programmer analysts who are well versed in both the business process to be developed and the environment in which the application will operate. Specifications elicited during the requirements phase are documented into application events using a set of flow charts and narratives. Design may first be laid out in a general design document, which is then refined to produce specifications for the detailed design. Design walkthroughs are often held to review the design before construction to ensure that all of the requirements have been accounted for in the application design. A security architect or administrator should participate in the design phase to ensure that security design requirements are integrated with the overall application design.

Implementation

Software programming is completed in this phase. Functional design specifications, typically created by analysts, are translated into executable processes using one or more programming languages. The usual scenario has multiple programmers working concurrently on discrete functional units or modules that comprise the whole application. Each capability is separately tested by the developer before being rolled up, or integrated, with other functions.

Integration

Integration occurs when multiple functional units of code or modules that form the application are compiled and run together. This ensures, for example, that outputs produced by one process are received as expected by a downstream process.

Deployment of System

When the application has been system tested, it is installed into a controlled environment for quality assurance and user acceptance testing. At this stage, the application is considered to be in its final form, installation and operational documentation has been developed, and changes are tightly controlled. Typically, a separate QA team performs quality testing before releasing the application to end users for the final acceptance testing phase. User acceptance testing requires formal signoff from the project sponsor to indicate that the application has met all requirements. Certification and accreditation may also be required before an application project can be closed. Release management, discussed in a later section, is a set of controlled processes used to implement the final, approved application into the production environment.

Maintenance

Applications rarely remain in the form in which they were originally released to production. Changes in business needs and practices, newly discovered bugs and vulnerabilities, and changes in the technical environment all necessitate changes to production applications.

The waterfall model described above is the oldest and most widely used formal development model; it is common practice in defense applications and in large, established organizations. Key differentiators of the waterfall model are its adherence to a highly structured linear sequence of steps or phases, an emphasis on completeness of up-front requirements and design, and the use of documentation and formal approvals between phases as primary control mechanisms. Major benefits of using the waterfall method are its ease of use and management (even with large teams), and the broad scope and detailed specificity of systems documentation that is available to certification, accreditation, and application maintenance and enhancement teams. A major drawback of the waterfall model is that it assumes a static set of requirements captured before design and coding phases begin. Thus, errors may not be noticed until later testing phases, where they are more costly to address. Even in the absence of errors, new requirements may surface during development due to regulatory and operational changes and emergence of new threats that impact the automated process. To correct for this, project managers must establish rigorous change management processes to reduce the disruption, delays, and cost overruns that can occur as a result of having to retrofit new requirements into the application. The later in development requirements surface, the more potentially disruptive they are, and the more likely they are to be shelved for future development. For this reason, security practitioners are urged to actively contribute to requirements gathering and documentation, and maintain awareness of the change management process to address any functional changes that may introduce new security requirements.

Testing

Testing is not a separate phase of waterfall development projects; instead, different types of testing and debugging occur from construction to installation and beyond. Programmers perform unit testing to validate proper functioning of code at the lowest level of functionality, which can be a process, function, program, or method in the case of object-oriented programming. Automated tools built into the developer’s programming environment are often used for unit testing.
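
To make the idea concrete, the following minimal sketch (Python, with hypothetical function and test names) shows what a unit test of a small input-validation routine might look like; in practice, developers would use whatever test framework is built into their programming environment.

    import re
    import unittest

    def is_valid_username(name):
        """Accept 3-20 character usernames built only from letters, digits, and underscores."""
        return bool(re.fullmatch(r"[A-Za-z0-9_]{3,20}", name or ""))

    class UsernameValidationTest(unittest.TestCase):
        def test_accepts_well_formed_names(self):
            self.assertTrue(is_valid_username("analyst_01"))

        def test_rejects_malformed_names(self):
            self.assertFalse(is_valid_username("ab"))           # too short
            self.assertFalse(is_valid_username("bad'name--"))   # unexpected characters
            self.assertFalse(is_valid_username(None))           # missing input

    if __name__ == "__main__":
        unittest.main()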

Integration testing is the next phase, in which individually tested units are assembled in functional groups and retested as a whole, following a test script or plan. Use and abuse cases generated during requirements gathering may form the basis of functional testing at this stage. The purpose of integration testing is to ensure that major application functions specified in the design are working together. All interactions, such as program calls, messages, etc., are tested in this phase.

System testing is performed on a complete, integrated system to ensure that all requirements and approved changes have been incorporated into the application. Some organizations create separate environments, such as servers, databases, files, and utilities, for the purpose of system testing. Objects in the system test environment should be secured in the same manner as in production to avoid “breaking” the application once it is released.

Additional Application Development Methods

There are several popular application development models other than the waterfall model. These include the spiral model, rapid application development, agile development, and component development and reuse.

Spiral Model

The spiral model is based on the waterfall development life cycle, but adds a repeated PDCA (Plan-Do-Check-Act) sequence at each stage of the waterfall progression. A first pass through the steps of the waterfall model is taken using a subset or high-level view of overall requirements, which are used as a basis for an initial prototype (working model) of the application. The spiral model assumes that requirements are naturally fleshed out in a hierarchical way, with high-level or basic requirements giving rise to more detailed functional requirements.

Extreme Programming and Rapid Application Development

Rapid Application Development (RAD) was designed to fully leverage modern development environments that make it possible to quickly build user interface components as requirements are gathered. Application users are intimately involved in RAD projects, working with the screen flows as they are being built and providing feedback to the developers. The advantages of RAD include high error detection rates early in development, bypassing the need for extensive retrofitting and regression testing. RAD projects thus require less project change management overhead. This very fact can be a downside, however; RAD projects may suffer fatal “scope creep,” as new requirements are continually added and teams lose sight of the end goal while cycling through an unending series of prototypes.

Agile Development

Agile builds on iterative development models. Agile development methods rely on feedback from application users and development teams as their primary control mechanism. Software development is seen as a continuous evolution, in which results from continuous release testing are evaluated and used to enhance subsequent releases of the developing code. Enhanced team productivity, increased development speed, and a reduction in production defects are all stated benefits of agile development. IT personnel who are well versed in traditional development methods typically need time to adjust to agile development, and performance gains achieved in the first year are expected to double, or better, in the second year after adopting agile methods.

Component Development and Reuse

The idea of component-based development is based on the reuse of proven design solutions to address new problems. This is not really a new concept; traditional utility programs such as date conversion routines are an example of components that have been in existence for decades. Components may be retained as design patterns, common modules, or architectural models in a component library that is searchable by developers looking to incorporate them in their applications. Component reuse reduces development and testing time and cost, increases quality and reliability, and promotes consistency and ease of maintenance among applications.

User and application authentication sequences, authorization checks, and encryption and decryption methods are examples of security functions that can be developed for reuse across applications. Some development platforms, such as J2EE (Java 2 Enterprise Edition), incorporate security methods in their programming libraries. Using prepackaged components is encouraged whenever possible to avoid vulnerabilities that can easily be introduced into homegrown security logic.

System Vulnerabilities, Secure Development, and Acquisition Practices

Exposing applications, infrastructure, and information to external users via the Internet creates an opportunity for compromise by individuals and groups wishing to steal customer data and proprietary information, interfere with operations and system availability, or damage an organization’s reputation. Vulnerabilities within web-facing applications provide opportunities for malicious attack by unauthorized users, authorized but malicious users, and malicious code executing locally or on a compromised machine connected to the application. Internal development projects should combine secure coding practices, appropriate program and infrastructure change control, and proactive vulnerability management to reduce vulnerabilities.

The Open Web Application Security Project (OWASP) provides a freely available listing of the top vulnerabilities found in Web applications; in reality, the list contains a mix of vulnerabilities and exploits that frequently occur as a result of compromised web browsers and, in some cases, web servers.11 Most of these attacks are platform independent, although specific platforms have been targeted by variants. At a minimum, coding standards and guidelines should be developed to protect applications against the OWASP Top Ten, which are in Table 2-3.12

Table 2-3: OWASP Top Ten—2013: “The Ten Most Critical Web Application Security Risks”

  1. Injection—Injection flaws, such as SQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s hostile data can trick the interpreter into executing unintended commands or accessing data without proper authorization.
  2. Broken Authentication and Session Management—Application functions related to authentication and session management are often not implemented correctly, allowing attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users’ identities.
  3. Cross-Site Scripting (XSS)—XSS flaws occur whenever an application takes untrusted data and sends it to a web browser without proper validation and escaping. XSS allows attackers to execute scripts in the victim’s browser, which can hijack user sessions, deface websites, or redirect the user to malicious sites.
  4. Insecure Direct Object References—A direct object reference occurs when a developer exposes a reference to an internal implementation object, such as a file, directory, or database key. Without an access control check or other protection, attackers can manipulate these references to access unauthorized data.
  5. Security Misconfiguration—Good security requires having a secure configuration defined and deployed for the application, frameworks, application server, web server, database server, and platform. Secure settings should be defined, implemented, and maintained, as defaults are often insecure. Additionally, software should be kept up to date.
  6. Sensitive Data Exposure—Many web applications do not properly protect sensitive data, such as credit cards, tax IDs, and authentication credentials. Attackers may steal or modify such weakly protected data to conduct credit card fraud, identity theft, or other crimes. Sensitive data deserves extra protection such as encryption at rest or in transit, as well as special precautions when exchanged with the browser.
  7. Missing Function Level Access Control—Most web applications verify function-level access rights before making that functionality visible in the UI. However, applications need to perform the same access control checks on the server when each function is accessed. If requests are not verified, attackers will be able to forge requests in order to access functionality without proper authorization.
  8. Cross-Site Request Forgery (CSRF)—A CSRF attack forces a logged-on victim’s browser to send a forged HTTP request, including the victim’s session cookie and any other authentication information, to a vulnerable web application. This allows the attacker to force the victim’s browser to generate requests the vulnerable application thinks are legitimate requests from the victim.
  9. Using Components with Known Vulnerabilities—Components, such as libraries, frameworks, and other software modules, almost always run with full privileges. If a vulnerable component is exploited, such an attack can facilitate serious data loss or server takeover. Applications using components with known vulnerabilities may undermine application defenses and enable a range of possible attacks and impacts.
  10. Unvalidated Redirects and Forwards—Web applications frequently redirect and forward users to other pages and websites, and use untrusted data to determine the destination pages. Without proper validation, attackers can redirect victims to phishing or malware sites, or use forwards to access unauthorized pages.

As the vulnerabilities described in this chapter have illustrated, guidelines for developers should include the following areas:

  • Authentication—Use standard, secure authentication mechanisms for users and applications, including mechanisms for forgotten passwords and password changes (require the old password be entered first), secure encrypted storage of authentication credentials, and enforcement of strong passwords not susceptible to dictionary attacks.
  • Authorization—Perform authorization checks on requested objects such as files, URLs, and database entries. Secure objects from unauthorized access.
  • Session Management—Most web application platforms include session management functions that link individual requests to an authenticated user account, so that the user does not need to manually re-authenticate to each web page. Custom cookies should be avoided for authentication (user and session) where possible. Proper session management includes timing out inactive sessions, deleting session information after timeout, not passing credentials in URL strings, and using salted hashes to protect session IDs.
  • Encryption of Sensitive Data—Encryption of data at rest may not be feasible for your organization, with the exception of authentication credentials, which should always be encrypted. Encryption of sensitive data in transit is simple and inexpensive to configure and should be required of all applications. Avoid homegrown encryption methods wherever possible.
  • Input Validation—Applications should never assume that data sent to them in the form of HTTP requests, form field input, or parameters are benign and in the proper format. User input should be validated using an accepted known good approach wherever possible (matching input to a set or range of acceptable values). String length or field size limits, data types, syntax, and business rules should be enforced on all input fields.
  • Disallow Dynamic Queries—Dynamic queries and direct database access should not be allowed within web applications. Stored procedures (routines precompiled in the database, or callable as program code) should be used wherever possible. Use strongly typed, parameterized APIs for queries and stored procedure calls, as illustrated in the sketch following this list.
  • Out-of-Band Confirmations—Consider sending confirmations out of band when there has been a password change, or significant business transaction performed via the website.
  • Avoid Exposing System Information—Avoid exposing references to private application objects in URL strings, cookies, or user messages.
  • Error Handling—Avoid exposing information about the system or process, such as path information, stack trace and debugging information, and standard platform error messages, in response to errors, and consider setting a low debug level for general use. Also consider using a standard default error handling routine for all application components, including those in the application environment (server operating system, application system, etc.). Ensure that error messages do not expose any information that could be exploited by hackers; for example, a returned message of “incorrect password” indicates that the supplied UserID is valid, while “login failed” does not expose any such information. Timing attacks can be avoided by enforcing a consistent wait time on certain transactions.
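
To make the input validation and parameterized query guidance above concrete, the following minimal sketch uses Python’s standard-library sqlite3 module; the table layout, column names, and validation pattern are hypothetical, and a production application would apply the same pattern through its own database API.

    import re
    import sqlite3

    ACCOUNT_ID_PATTERN = re.compile(r"^\d{6,10}$")  # known-good format: 6-10 digits

    def get_account(conn, account_id):
        # Input validation: reject anything that does not match the expected format.
        if not ACCOUNT_ID_PATTERN.match(account_id):
            raise ValueError("invalid account identifier")
        # Parameterized query: the driver binds the value, so it is never
        # interpreted as SQL, which defeats classic injection strings.
        cur = conn.execute("SELECT id, owner FROM accounts WHERE id = ?", (account_id,))
        return cur.fetchone()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, owner TEXT)")
        conn.execute("INSERT INTO accounts VALUES ('123456', 'J. Smith')")
        print(get_account(conn, "123456"))          # ('123456', 'J. Smith')
        # get_account(conn, "123456' OR '1'='1")    # rejected by input validation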

Hardware/Software

IT asset management (ITAM) entails collecting inventory, financial, and contractual data to manage an IT asset throughout its life cycle. ITAM depends on robust processes, supported by tools that automate manual tasks. Capturing and integrating auto-discovery/inventory, financial, and contractual data in a central repository for all IT assets enables an organization to effectively manage vendors and its software and hardware asset portfolio from requisition through retirement, monitoring each asset’s performance throughout its life cycle.

A successful attack requires a vulnerable condition, and unmanaged hardware/software assets are more likely to contain one; successful attacks on these assets also often go undetected because no one is attending to them. The Hardware Asset Management (HWAM) capability is one of four capabilities that focus on device management. The other device management capabilities are:

  • Software Inventory Management (SWAM)
  • Configuration Setting Management (CSM)
  • Vulnerability (Patch) Management (VUL)

According to the “Continuous Diagnostics and Mitigation (CDM) Hardware Asset Management (HWAM) Capability” report published by the U.S. Department of Homeland Security:

“The Hardware Asset Management capability addresses whether someone is assigned to manage the machine and whether the machine is authorized. It does not address how well the machine is managed. Quality of management is covered by Software Asset Management (SWAM), Configuration Setting Management (CSM), and Vulnerability Management (VUL). One reason unmanaged devices are more vulnerable is that no one is actively managing software installation, configuration settings, and vulnerabilities. This leaves the software on those devices with a higher risk of successful attack. If we do not know who is managing the device, we cannot send the responsible individual(s) data to identify problems with software installed (SWAM), configuration settings (CSM), and patching (VUL). In addition, we cannot hold anyone responsible for poor management of the device.”13

For the purposes of Hardware Asset Management, a device is:

  • Any hardware asset that is addressable (i.e., has an IP address) and is connected to your organization’s network(s). These devices and their peripherals are remotely attackable.
  • Any USB device connected to a hardware asset that has an IP address. These devices are a vector to spread malware among devices.

This definition is used by FISMA and is documented on page 23 of the annual FISMA reporting instructions. Thus, not every “device” in a property inventory is included in the Hardware Asset Management definition of devices. For example, a monitor (not addressable, thus not included) can be attacked only through an addressable computer.14

According to the U.S. Department of Homeland Security report, the minimal Hardware Asset Management data recorded for desired state devices should include the following, as reproduced from the report in Table 2-4.15

Table 2-4: Minimal Hardware Asset Management Data to Be Recorded for Desired State

Data Item—Justification

  • Expected CPE (vendor, product, version, release level) or equivalent—For reporting device types; for supply chain management; to know what CVEs may apply to these devices.
  • Person or organization who is responsible for managing the hardware asset (Note: Such assignments should ensure that the designee is not assigned too many assets to effectively manage them)—To know who should fix specific risk conditions; to assess the responsible individuals’ risk management performance.
  • Data necessary to link desired state inventory to actual state inventory—To be able to identify unauthorized and unmanaged devices.
  • Data necessary to physically locate hardware assets—So managers can find the device to fix it; to identify mobile devices so that extra controls can be assigned.
  • The period of time the asset is authorized—To allow previously authorized devices to remain in the inventory, while knowing they are no longer authorized.
  • Expected status of the device (active, inactive, stolen, missing, transferred, etc.)—To know which authorized devices are not likely to be found in actual inventory.
  • Data necessary to physically identify the asset (such as property number or serial number)—To be able to validate that the remotely found device is actually this device, and not an imposter.
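
As a rough sketch of how the desired state inventory might be linked to the actual state inventory, the following Python fragment compares two inventories keyed by serial number (a purely hypothetical data layout) to flag unauthorized and missing devices.

    # Hypothetical desired-state inventory: serial number -> responsible manager
    desired_state = {
        "SN-1001": "network-ops",
        "SN-1002": "database-team",
        "SN-1003": "helpdesk",
    }

    # Hypothetical actual-state inventory produced by automated discovery
    actual_state = {"SN-1001", "SN-1002", "SN-9999"}

    unauthorized = actual_state - set(desired_state)  # on the network but not authorized
    missing = set(desired_state) - actual_state       # authorized but not found

    print("Unauthorized/unmanaged devices:", sorted(unauthorized))
    print("Authorized devices not found:", sorted(missing))
    for serial in missing:
        print(f"Notify {desired_state[serial]} about device {serial}")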

Data

The definition of data management provided by the Data Management Association (DAMA) is: “Data management is the development, execution and supervision of plans, policies, programs, and practices that control, protect, deliver, and enhance the value of data and information assets.”16 The SSCP needs to be able to engage in data management activities in order to ensure the confidentiality, integrity, and availability of data.

Secure Information Storage

Laptops and other mobile devices are at significant risk of being lost or stolen. Sensitive mobile data may be encrypted at the file or folder level, or the entire disk may be encrypted. File/folder encryption is simpler and faster to implement, but presents exposures if the operating system or user of the machine writes data to an unencrypted location. Full disk encryption protects the entire contents of a laptop’s hard drive, including the boot sector, operating system, swap files, and user data. Since it does not rely on user discretion to determine what information to encrypt, it is typically the preferred method of protecting sensitive mobile data from unintended disclosure. There are some drawbacks; full disk encryption comes at the cost of a more complicated setup process that includes changes to the drive’s boot sequence, and it may take hours during initial implementation to encrypt the hard drive (subsequently, new data are encrypted on the fly).

Typically, disk encryption products use software-generated symmetric keys to encrypt and decrypt the contents of the drive. Keys are stored locally and protected by a password or passphrase or other authentication mechanism that is invoked at boot time to provide access to the decryption key. Devices containing a TPM (Trusted Platform Module) chip contain a unique, secret RSA key burned into the chip during manufacture to securely generate derivative keys. Using hardware encryption in conjunction with software-based encryption products is a more secure approach to protecting highly sensitive mobile data. In addition, TPM chips provide additional security features such as platform authentication and remote attestation, a form of integrity protection that makes use of a hashed copy of hardware and software configuration to verify that configurations have not been altered.
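
As a minimal illustration of the passphrase-protected key concept (not of any particular product), the following Python sketch derives a key-encryption key from a boot-time passphrase using PBKDF2 from the standard library; the passphrase, iteration count, and the toy XOR wrapping are assumptions for illustration only.

    import hashlib
    import os

    def derive_kek(passphrase, salt, iterations=600_000):
        """Derive a 256-bit key-encryption key (KEK) from a user passphrase."""
        return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

    # The key that actually encrypts the drive contents is random and never typed by the user.
    data_encryption_key = os.urandom(32)

    # It is stored only after being wrapped by the passphrase-derived KEK.
    salt = os.urandom(16)
    kek = derive_kek("correct horse battery staple", salt)
    wrapped_key = bytes(a ^ b for a, b in zip(data_encryption_key, kek))  # toy wrapping for illustration

    # At boot, re-entering the passphrase rederives the same KEK and unwraps the key.
    recovered = bytes(a ^ b for a, b in zip(wrapped_key, derive_kek("correct horse battery staple", salt)))
    assert recovered == data_encryption_key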

Authentication may be integrated with a network directory service such as LDAP or Active Directory, with another external authentication service, or with a two-factor authentication method such as smart cards, hardware tokens, or biometric devices.

Encrypted data can be irrevocably lost when a disk crashes or a user forgets his or her password, unless there is a mechanism in place to recover from such events. Some software allows for a master key or passphrase that can access the data without knowledge of the user password. Recovery disks containing backed-up data and boot information can be created during the initial installation; however, for ease of administration, it is best to combine disk encryption with a network-based backup solution.

Before implementing an encryption solution, care must be taken to thoroughly test all laptop software for compatibility, particularly software that interacts directly with operating system processes. Applications used for asset tracking, desktop intrusion prevention, patch management, and desktop administration may not be able to access encrypted information. If the organization uses other mobile devices such as PDAs, encryption software should support these devices as well for ease of integration, support, and maintenance.

Encryption is a relatively straightforward and cost-effective means of protecting mobile data, but it is not a silver bullet. Encryption keys are vulnerable to discovery during encryption/decryption operations while the key data are stored in system memory, and users may inadvertently or knowingly reveal the password to unlock the decryption key. Personal firewalls, antivirus software, and appropriate physical and personnel security are essential elements of an overall mobile data protection program.

Backup tapes lost or diverted during transport and offsite storage have been another source of security breaches. Backup tapes can be encrypted during the backup operation using software built into or integrated with your backup management solution. In many implementations, the server containing the backup software acts as a client to the key management server whose function is to create, store, manage, and distribute encryption keys. Special encrypting tape drives or libraries must be used to encrypt and decrypt the tapes. When implementing a backup tape encryption solution, you may need to update internal hardware as well as hardware specified in disaster recovery equipment schedules. Thorough planning and testing should be performed to ensure that data on encrypted backup tapes will be accessible when needed.

Data residing on a storage area network (SAN) can be encrypted at various levels. Both internal and external disk array controllers offer encryption capabilities. Alternatively, data may be encrypted on the host system or at the individual disk level, offering the possibility of encrypting individual files and directories. SAN encryption implementations typically make use of a key management server, a corresponding client, and an encryption processing device integrated into the SAN infrastructure.

In an enterprise environment, sensitive data are often stored in centralized databases for access by a variety of applications. Database encryption is used to protect these data from unauthorized access by human and software agents. Database encryption has the distinct advantage of protecting sensitive data from the eyes of even the most privileged system and database administrators (except, of course, those with access to the key management system). Encryption mechanisms may be built into the database management system itself, or those provided by third-party software compatible with the database and operating system platform may be employed. Database encryption may occur at the file, database, or column/field level. It is not necessary and, in fact, it is detrimental to encrypt an entire database when only one data element is confidential. Database encryption solutions are available for most common database management systems. Implementing these solutions presents a number of challenges. When selecting a solution, the SSCP needs to understand how each addresses the following:

  • Database Size—Encryption may increase the size of data elements in your database by padding smaller chunks of data to produce fixed block sizes. If the database is not sized to accommodate these changes, it may need to be altered.
  • Performance—Performance degradation, particularly when encrypting indexed or frequently accessed fields, may be noticeable. If application performance is a concern, databases may need to be reorganized and re-indexed to accommodate the additional overhead of decrypting fields.
  • Application Compatibility—While some newer, integrated encryption solutions provide transparent decryption services to applications, most communicate through APIs, which must be compiled into business applications that access encrypted data. A thorough inventory of such applications is needed to prevent unanticipated failures, and resources will be required to modify impacted applications.
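
As a rough sketch of column-level encryption performed in the application tier, the following example encrypts a single sensitive field before writing it to the database and decrypts it on read; it assumes the third-party cryptography package is installed, and the table layout is hypothetical.

    import sqlite3
    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()  # in practice, obtained from the key management server
    cipher = Fernet(key)

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, ssn BLOB)")

    # Only the confidential column is encrypted; other columns remain searchable.
    conn.execute(
        "INSERT INTO employees (name, ssn) VALUES (?, ?)",
        ("J. Smith", cipher.encrypt(b"123-45-6789")),
    )

    row = conn.execute("SELECT name, ssn FROM employees").fetchone()
    print(row[0], cipher.decrypt(row[1]).decode())  # decrypted only by authorized code paths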

Data Scrubbing

Wholesale replication of data from production to test is a common practice. Wholesale replication of security controls from production to test is not. There is a practical reason for this: if developers do not have access to accurate representations of data to work with, test results cannot be a reliable indicator of application performance in the production environment. Organizations that outsource development or testing are especially at risk of unauthorized access to sensitive production data because of difficulties in supervising and monitoring third-party account activity. One method of addressing this issue is to sanitize the data; that is, to mask, scramble, or overwrite sensitive data values with meaningless data that nonetheless conform to data format and size restrictions. Data sanitization is also known as scrubbing or de-identification. It is not to be confused with encryption, which implies that data can be decrypted and viewed in its original form. The goal of data sanitization is to obfuscate sensitive data in such a way that the actual data values cannot be deduced or derived from the sanitized data itself, or through inference by comparing the sanitized data values with values of other data elements (so-called inferential disclosure). For example, if a sanitization routine substitutes characters on a one-for-one basis, replacing A with Q and S with W, and so on, the original value of each character of a data element may be easily guessed.

Merely replacing data with null values is not an effective way to sanitize data, as most developers require a fairly accurate representation of the actual data element to interpret test results. Masking replaces characters in specified fields with a mask character, such as an X; for example, the credit card number 0828 2295 2828 5447 may be masked as 0828 XXXX XXXX 5447. Since the values of particular fields (such as those containing card issuer details) may need to remain intact for adequate testing, masking data in this manner requires coordination with the development teams. Another sanitization technique is substitution, wherein certain field values are replaced with randomly generated values that have no relationship to the original value; for example, salary figures in a payroll database may be replaced with values randomly selected from a table of salaries. This technique produces data that is true to the original format, but it may be difficult to generate and maintain tables containing the large variety and amount of random data needed to meet data sanitization requirements for large systems. Shuffling and other techniques that merely rearrange existing data—for example, reorganizing the salary column so that salaries are associated with different employees—are fast and efficient, but they are effective only on large databases and do not address all the needs of de-identification, particularly if the operation is not entirely random.
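
A minimal masking routine along the lines described above might look like the following Python sketch; which leading and trailing digits remain visible is an assumption that would be coordinated with the development and compliance teams.

    def mask_card_number(card_number, visible_prefix=4, visible_suffix=4):
        """Replace the middle digits of a card number with 'X', preserving format."""
        total_digits = sum(ch.isdigit() for ch in card_number)
        digits_seen = 0
        masked = []
        for ch in card_number:
            if ch.isdigit():
                digits_seen += 1
                keep = digits_seen <= visible_prefix or digits_seen > total_digits - visible_suffix
                masked.append(ch if keep else "X")
            else:
                masked.append(ch)  # keep spaces and dashes so length and layout are unchanged
        return "".join(masked)

    print(mask_card_number("0828 2295 2828 5447"))  # 0828 XXXX XXXX 5447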

One concern with applying data sanitization is the maintenance of referential integrity within the database or file system. That is, relationships between files and tables must not be altered or broken when data are sanitized; for example, if an account number used as a primary key in one table is also used as a foreign key in another table, the relationship will be broken if the account number is converted to different values in each table. Data sanitization solutions should thus allow the administrator to define all critical relationships during configuration so that consistent values are produced across database tables. Sanitization is typically a batch operation that is run when production data are copied or refreshed into the test environment. It should be performed by an administrator who is not part of the development group that maintains the test environment.

Data Deduplication

Modern storage area networks (SANs) offer the ability to deduplicate data. Deduplication is a process that scans the entire collection of information looking for duplicate chunks of data that can be consolidated. The security practitioner must understand the implications of deduplication for integrity and encryption. Because data deduplication replaces duplicate information with a reference or pointer, the stored representation of a file may be modified, making it difficult or impossible to prove that the file on the drive is identical to the file that existed before it was saved to the deduplicated volume. Most modern SANs offer a way to hash files for comparison before and after deduplication; however, this hashing often comes at a performance tradeoff.
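
Where integrity evidence is required, one simple approach is to record a cryptographic hash of a file before it is written to the deduplicated volume and compare it with a hash computed later; a minimal sketch follows (the file paths are hypothetical).

    import hashlib

    def sha256_of_file(path, chunk_size=1024 * 1024):
        """Hash a file in chunks so large files do not need to fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical paths: the original file and the copy read back from the SAN volume.
    before = sha256_of_file("report.docx")
    after = sha256_of_file("/mnt/san_volume/report.docx")
    print("Integrity preserved" if before == after else "File contents differ")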

Because deduplication relies on finding common areas of files, it should come as no surprise that encryption works against deduplication. The same file encrypted with different keys by different users will not be deduplicated by the system and may take up twice as much space as it would if the files were deduplicated. The security practitioner must work with system administrators and SAN experts to ensure that the performance necessary for the system can be achieved with the security configuration required.

Managing Encryption Keys

Because encryption keys control access to sensitive data, the effectiveness of any encryption strategy hinges on an organization’s ability to securely manage these keys. Key management refers to the set of systems and procedures used to securely generate, store, distribute, use, archive, revoke, and delete keys. Defining a key management policy that identifies roles, responsibilities, and security requirements is a critical yet often overlooked component of any successful encryption strategy. The key management policy and associated documentation should be part of the organization’s overall systems security plan. Considerations for key management policy and for selecting and deploying an effective key management system include the following:

  • Roles and Responsibilities—Responsibilities for generation, approval, and maintenance of the key management system and its associated processes should be clearly articulated in the policy document. The key management system access control mechanism must operate at a sufficient level of granularity to support the policy, including any provisions for separation of duties or dual control required by the organization’s security plan.
  • Key Generation and Storage—Random number generators are used to generate keys. The key generation process should produce keys of the desired length that are sufficiently random, containing enough entropy that keys cannot be easily guessed. Some systems generate entropy to seed keys by collecting timing data from random system events. The server used to generate and store keys may be a software application running on a standard server operating system, or may be a purpose-built, hardened platform dedicated to key management. One advantage of using a purpose-built system is that common operating system vulnerabilities that might be exploited to compromise encryption keys can be avoided. Another is that key management roles can be separated from database and server administration roles, reducing the risk of data being compromised by privileged administrator accounts. The keys themselves, key control material (such as the unique key identifier or GUID created during key generation that uniquely identifies the key within a particular name space), and key server event logs must be protected from unauthorized access. (A brief sketch of key generation and labeling appears after this list.)
  • Distribution—The key distribution center or facility should be capable of authenticating and checking authorization for key requests. The authentication mechanism should be sufficiently robust and protected from compromise. The client should have an integrity mechanism to validate the authenticity of the issuer and the proper format of the keying material before accepting a key, and should verify receipt once the key has been accepted.
  • Expiration—Encryption keys are assigned a cryptoperiod, or time span in which they are authorized for use. In general, shorter cryptoperiods are more secure, but create logistical issues for rekeying and updating keys, especially for enterprise applications in which the cost of rekeying and re-encrypting large amounts of data can be prohibitively high. Typically, keys generated to secure stored data have longer cryptoperiods than those used to secure communications. Strong keys combined with other forms of access control can compensate for longer cryptoperiods.
  • Revocation and Destruction—The key management system should support timely revocation of expired and compromised keys, and secure destruction of key material that is no longer valid.
  • Audit and Tracking—All key management operations should be fully audited, and event logs or audit records should be protected from unauthorized access and modification. Facilities must be available to track keying material throughout its life cycle, and should include mechanisms to detect and report on key compromise. Key labeling may be used to identify attributes of a key such as its identity, cryptoperiod, and key type; labels should be stored with the key for identification purposes.
  • Emergency Management—The key management policy should specify the requirements for emergency replacement and revocation of encryption keys. Availability should be protected by storing backup or archive copies of keys in a separate location. The appropriate disaster recovery procedures must be created to address both key recovery and recovery of the key management system itself.

Information Rights Management (IRM)

IRM functions to assign specific properties to an object, such as how long the object may exist, which users or systems may access it, and whether any notifications need to occur when the file is opened, modified, or printed. IRM works extremely well in organizations where IRM is deployed widely and users understand how to configure the restrictions of IRM when sharing information. When information is sent outside of the organization, IRM can start to fall apart. For example, if a document is marked in the IRM system so that only Bob can print it, then within the organization and the systems supporting the IRM, only Bob can print the file. However, if Bob sends the file to Julie, who doesn't have the IRM software on her computer, the file may either not open, or open without enforcing the IRM restrictions.

Secure Output

Many print processing functions send output to printers in the form of print spooler files, which contain human-readable copies of data to be printed. Securing spooled files is necessary to preserve confidentiality. One way to accomplish this is to direct certain sensitive output to a secure document or print server for processing. If this is not possible, you should limit the number of individuals who have print operator authority or who are otherwise able to view, redirect, or manage output files and control functions. Printers receiving sensitive output should be located in secured areas or, in some cases, in individual offices. Users should be instructed to monitor their print jobs and pick up their output as soon as it is produced. Some printers support authentication before releasing print jobs. Some printers support encrypted file transfer and storage, which should be considered for highly sensitive documents.

Data Retention and Disposal

A record retention policy and schedule (list of records, owners, retention periods, and destruction methods) is an important component of an organization’s information handling procedures. Information owners are responsible for designating retention periods and assigning custodial duties, typically in IT, to ensure that record integrity is preserved for the specified retention period. Audits may be performed to ensure policy compliance. Many organizations use a commercial document management system to organize and automate aspects of their record retention policy.

Handling procedures for confidential information must include provisions for secure destruction of records containing sensitive information. For private industry in the United States, such procedures may be required by U.S. privacy regulations such as the Fair and Accurate Credit Transactions Act of 2003 (FACTA), HIPAA, and GLBA. Additional mandates apply to government entities and contractors working with national interest information. Records destruction should be authorized, appropriate to the level of sensitivity of the record, secure, timely, and documented. The goal of secure destruction is to assure the appropriate sanitization of sensitive information so that it is no longer legible and so that insufficient data remains to be pieced together to derive protected data elements. Secure destruction methods are designed to combat the problem of data remanence, which is generally used to refer to the information left in a record or file after the original data has been deleted, or moved to another location. Secure destruction methods include burning, shredding, disk cleaning or reformatting, and tape degaussing.

Due to environmental concerns, it is often not considered appropriate to burn paper records or disks. Instead, paper documents and CD/DVD media should be shredded using special equipment designed for this purpose. Paper shredders typically cut documents into thin strips or small fragments. They come in a variety of capacities, from personal shredders to industrial-strength models. Several types of shredders are available, such as:17

  • Strip-cut shredders—Cut paper in long, thin strips.
  • Cross-cut shredders—Preferable to strip-cut, these cut paper into small rectangular fragments.
  • Particle-cut shredders—Similar to cross-cut; create tiny square or circular fragments.
  • Hammermills—Pound paper through a screen.
  • Granulators (or Disintegrators)—Repeatedly cut paper into fine, mesh-size particles.

Shredding services can be contracted to process large volumes of information, either onsite using mobile equipment, or at a specially designed facility. Depending on the application, such companies may need to hold a security clearance, and they may be suitable only for certain classification levels. For typical applications, such clearance may not be necessary, but it is wise to request a certificate of destruction, which most companies will supply on request.

Magnetic media, including diskettes, CD/DVDs, disk drives, and tapes, may be destroyed using a number of methods. Often, however, these methods are not environmentally friendly, require excessive manual effort, and are not suitable for high-volume enterprise application. CD/DVD shredders are available at nominal cost and are practical for small business units. Fixed disk shredders are also available, but they are mainly geared to the consumer market and may not produce consistent results with data in disparate formats. When disk shredders are used, they should produce fragments that contain less than one (512k) block of data. Many organizations may wish to preserve the media for reuse or redeployment to another location. For example, many organizations donate used PCs to schools or charitable organizations. Even when media is redeployed within an organization, care should be taken to remove sensitive information before the media is reused.

Cleaning/Sanitizing

Methods of destroying data contained on magnetic media include various techniques for clearing or sanitizing data. Clearing refers to any operation that removes or obscures stored data such that it cannot be reconstructed using operating system or third-party utilities. Sanitizing or purging removes data in such a way that it cannot be reconstructed at all. While disk clearing may be acceptable protection against accidental or casual disclosure, it is not adequate to prevent someone with intent and specialized recovery tools from restoring data.

Cloud service providers should support eradication of data when deleted. Ensure when selecting a cloud provider that the provider can support overwriting and scrubbing information from the shared infrastructure when a deletion occurs.

Erasure or Reformatting

Conventional magnetic recording heads do not operate at sufficient density to totally erase the contents of disk and tape; therefore, merely erasing or reformatting magnetic media using conventional drives will not eliminate all the stored information. Furthermore, most operating system file management utilities do not delete the data itself, but instead remove the entry in the system directory or address table so that it cannot be immediately accessed. The data itself will typically remain on disk until the sector it occupies is overwritten with new data. Metadata, or schema information, usually remains intact as well. Even when metadata is erased or overwritten, forensic software that performs direct reads on the disk sectors is available to retrieve the information.

Formatting, repartitioning, or reimaging a disk drive is only slightly more secure than erasing the data. Modern disk drives cannot be reformatted by older, low-level methods that skew sector or track numbers or interleave sectors, as certain data are necessary for the servo mechanism in the drive to locate a desired track. Instead, many reformatting utilities (such as the UNIX dd utility) write a zero byte to each sector on the disk (also known as zero filling) in a single pass. High-level reformatting methods operate by creating a file system on disk and installing a boot sector. Although space is marked as available on disk and data appear to be erased, data are not actually deleted or overwritten in the reformatting process.

Disk Wiping/Overwriting

Disk wiping, or overwriting, is a method of writing over existing data, typically with a stream of zeroes, ones, or a random pattern of both. Special procedures may be required, such as using certain combinations of patterns or making a certain number of passes over the disk, each time writing a different pattern. Overwriting is acceptable for clearing media for reuse, but it is not a sufficient method of sanitizing disk or tape. Overwriting before reformatting is a much more effective technique than reformatting alone and can be a suitable means of clearing less sensitive content from disk.
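
The sketch below shows the basic overwrite idea for a single file using only the Python standard library; note that on journaling file systems, SSDs, and copy-on-write storage, overwriting in place does not guarantee that older copies are gone, so this should be treated as clearing rather than sanitization.

    import os

    def overwrite_file(path, passes=3):
        """Overwrite a file's contents in place before deleting it (clearing, not purging)."""
        length = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(length))  # random pattern; a pass of zeroes is also common
                f.flush()
                os.fsync(f.fileno())         # push the data out of the OS cache
        os.remove(path)

    # overwrite_file("old_customer_export.csv")  # hypothetical file name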

Degaussing

Degaussing is a technique of erasing data on disk or tape (including video tapes) that, when performed properly, ensures that there is insufficient magnetic remanence to reconstruct data. This is performed with a machine called a degausser, which applies a magnetic field to the media and then removes it, eliminating the residual magnetic signals on the media. Media can be classified in terms of coercivity, or the intensity of the magnetic energy a disk or tape can store, measured in a unit called oersteds. To perform properly, the degausser must be capable of creating a magnetic field two to three times the coercivity of the media. Magnetic tape may be classified by coercivity as type I, II, or III, and must be degaussed with a machine rated for the type of tape employed.

Degaussers may be operated manually, or be automatic using a conveyor belt assembly. Because of the strength of the magnetic field generated, not all media can be successfully degaussed without destroying the information needed by the servo mechanism used to read the disk or tape, which would render the media unusable. This is particularly true of some disk and tape cartridges used in midrange and mainframe systems. Therefore, manufacturer’s specifications should be consulted before planning a degaussing strategy.

The U.S. NIST SP 800-88 provides a matrix (reproduced in Table 2-5) for determining requirements for clearing and sanitizing media at various levels. 18

Table 2-5: NIST SP 800-88 Matrix for Determining Requirements for Clearing and Sanitizing Media at Various Levels

Function: IDENTIFY (ID)

Category: Asset Management (AM)—The data, personnel, devices, systems, and facilities that enable the organization to achieve business purposes are identified and managed consistent with their relative importance to business objectives and the organization’s risk strategy.

  • ID.AM-1: Physical devices and systems within the organization are inventoried. CRR Reference: AM:G2.Q1 (Technology); RMM Reference: ADM:SG1.SP1; Informative References: CCS CSC 1; COBIT 5 BAI03.04, BAI09.01, BAI09.02, BAI09.05; ISA 62443-2-1:2009 4.2.3.4; ISA 62443-3-3:2013 SR 7.8; ISO/IEC 27001:2013 A.8.1.1, A.8.1.2; NIST SP 800-53 Rev. 4 CM-8.
  • ID.AM-2: Software platforms and applications within the organization are inventoried. CRR Reference: AM:G2.Q1 (Technology); RMM Reference: ADM:SG1.SP1; Informative References: CCS CSC 2; COBIT 5 BAI03.04, BAI09.01, BAI09.02, BAI09.05; ISA 62443-2-1:2009 4.2.3.4; ISA 62443-3-3:2013 SR 7.8; ISO/IEC 27001:2013 A.8.1.1, A.8.1.2; NIST SP 800-53 Rev. 4 CM-8.
  • ID.AM-3: Organizational communication and data flows are mapped. CRR Reference: AM:G2.Q2; RMM Reference: ADM:SG1.SP2; Informative References: CCS CSC 1; COBIT 5 DSS05.02; ISA 62443-2-1:2009 4.2.3.4; ISO/IEC 27001:2013 A.13.2.1; NIST SP 800-53 Rev. 4 AC-4, CA-3, CA-9.
  • ID.AM-4: External information systems are catalogued. CRR Reference: AM:G2.Q1 (Technology); RMM Reference: ADM:SG1.SP1; Informative References: COBIT 5 APO02.02; ISO/IEC 27001:2013 A.11.2.6; NIST SP 500-291 3, 4; NIST SP 800-53 Rev. 4 AC-20, SA-9.
  • ID.AM-5: Resources (e.g., hardware, devices, data, and software) are prioritized based on their classification, criticality, and business value. CRR Reference: AM:G1.Q4; RMM Reference: ADM:SG2.SP1; Informative References: COBIT 5 APO03.03, APO03.04, BAI09.02; ISA 62443-2-1:2009 4.2.3.6; ISO/IEC 27001:2013 A.8.2.1; NIST SP 800-34 Rev. 1; NIST SP 800-53 Rev. 4 CP-2, RA-2, SA-14.
  • ID.AM-6: Cybersecurity roles and responsibilities for the entire workforce and third-party stakeholders (e.g., suppliers, customers, partners) are established. CRR Reference: AM:MIL2.Q3; RMM Reference: ADM:GG2.GP7; Informative References: COBIT 5 APO01.02, DSS06.03; ISA 62443-2-1:2009 4.3.2.3.3; ISO/IEC 27001:2013 A.6.1.1; NIST SP 800-53 Rev. 4 CP-2, PM-11.

Disclosure Controls: Data Leakage Prevention

Various implementations of DLP systems exist; the two most common are those that protect transfer of sensitive data to mobile storage devices such as USB keys and smartphones, and those that prevent data leakage via web and e-mail at an organization’s Internet gateway. Less prevalent are those solutions that tackle confidentiality of data at rest in files, databases, and mass storage facilities. An effective data leakage prevention strategy includes use of both host- and network-based components that perform the following functions:

  • Data Discovery—The process of “crawling” distributed files and databases to locate sensitive data is the first step in implementing data leakage prevention tools. The discovery process has intrinsic value, even without implementing loss prevention tools, in that organizations can use it to pinpoint exactly where their sensitive data are stored and design additional safeguards, such as policies and access control mechanisms, to protect the data. One may uncover, for example, cases where users run queries over sensitive data that are stored in a secured database and then save the results to their desktops or to an unsecured public file, where access control safeguards may be weaker. Note—This violates the “*” property of the Bell–LaPadula model!
  • Labeling—Data may be labeled or “tagged” with an identifier that can be used to subsequently monitor movement of that data across the network. This is particularly useful in identifying documents and files containing sensitive information. Labels used may correspond to the sensitivity levels defined in the organization’s information classification policy, or may identify specific types of data such as PHI (Protected Health Information).
  • Policy Creation—Content monitoring and usage policies specify which data are sensitive, and define rules for copying or transmitting that data, typically using a combination of predefined labels, keywords, and regular expressions (e.g., nnn-nn-nnnn to identify a social security number) to identify unique data elements; a simple pattern-matching sketch appears after this list.
  • Content Detection/Monitoring—Data communications over local and wide area networks, data traversing perimeter gateway devices, and data leaving host computers via USB or serial connections are monitored by inspecting the contents of the communication at the file, document, and packet level. At the network layer, packet-level monitoring can be used to identify and intercept transmission of sensitive data through FTP, SSL, and posting to blogs and chat rooms among other things. Documents transferred as attachments to e-mail and instant messages can also be monitored and blocked at gateways if they contain sensitive content. To identify data transferred to removable storage, software agents are typically employed on target machines to monitor traffic over USB, wireless, and Firewire ports.
  • Prevention or Blocking—When policy violations are detected, user actions may be prevented or network traffic may be dropped, depending on the location of the violation. Alternatively, encryption may be enforced before a write operation to CD, USB, or other removable media.
  • Reporting—Violations of data disclosure policies are reported, typically showing the policy that was violated, the source IP address, and the login account under which the violation occurred.
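
For illustration only, the following minimal Python sketch shows how a DLP content rule might combine label keywords and a regular expression (such as the nnn-nn-nnnn social security number pattern mentioned above) to flag sensitive text. The label names and pattern are assumptions, not those of any particular product.

    import re

    # Hypothetical rule: flag text that carries a sensitivity label
    # or matches an SSN-style pattern (nnn-nn-nnnn).
    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
    SENSITIVE_LABELS = {"CONFIDENTIAL", "PHI", "RESTRICTED"}

    def violates_policy(text):
        """Return True if the text appears to contain sensitive content."""
        if any(label in text.upper() for label in SENSITIVE_LABELS):
            return True
        return bool(SSN_PATTERN.search(text))

    print(violates_policy("Customer SSN: 123-45-6789"))  # True
    print(violates_policy("Quarterly cafeteria menu"))   # False

A production DLP engine inspects files, e-mail, and packets rather than simple strings, but the matching logic is conceptually the same.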

Regardless of the method used to detect and prevent data leakage, it should be supplemented with traditional safeguards such as physical and logical access controls, encryption, and auditing. It must also be kept current to accommodate changes in applications, business processes and relationships, and infrastructure.

Technical Controls

Technical controls are security controls that the computer system executes. The controls can provide automated protection from unauthorized access or misuse, facilitate detection of security violations, and support security requirements for applications and data. The implementation of technical controls, however, always requires significant operational considerations, and should be consistent with the management of security within the organization.

Some of the technical controls that the SSCP may encounter in the organization can be identified by category. The following sections explore these categories.19

Identification and Authentication

Regardless of who accesses the system, the SSCP should discuss the identification and authentication security controls that are used to protect the system. These include the following:

  • The system’s user authentication control mechanisms along with the processes used to control changes to those mechanisms should be detailed.
  • If passwords are to be used as a control element in a system, the minimum and maximum password lengths, the character sets to be used for password creation, the procedures for voluntary password changes, and the procedures for password resets due to compromise should all be provided. (A minimal policy-check sketch follows this list.)
  • The mechanisms used to create accountability, audit trails, and the protection of the authentication process should be described.
  • All policies that allow for the bypassing of the authentication system along with the controls used should be detailed.
  • The number of invalid access attempts to be allowed and the actions taken when that limit is exceeded should be described.
  • The procedures for key generation, distribution, storage, entry, use, archiving, and disposal should be detailed.
  • How biometric and token controls are to be used and implemented should be described.
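
As a minimal sketch only, the following Python fragment shows how documented password parameters (length limits and required character sets) might be checked programmatically. The specific values are illustrative assumptions; an organization would take them from its own password standard.

    import string

    # Illustrative values only; the organization's password standard supplies these.
    MIN_LENGTH, MAX_LENGTH = 12, 64
    REQUIRED_SETS = [string.ascii_lowercase, string.ascii_uppercase,
                     string.digits, string.punctuation]

    def check_password(candidate):
        """Return a list of policy violations; an empty list means compliant."""
        problems = []
        if not MIN_LENGTH <= len(candidate) <= MAX_LENGTH:
            problems.append("length outside %d-%d characters" % (MIN_LENGTH, MAX_LENGTH))
        for charset in REQUIRED_SETS:
            if not any(ch in charset for ch in candidate):
                problems.append("missing a character from a required set")
        return problems

    print(check_password("short"))  # reports length and character-set violations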

Logical Access Controls

Logical access controls authorize or restrict the activities of users. This discussion includes hardware and software features that permit only authorized access to the system, restrict users to authorized functions and actions, and detect unauthorized activities. These include the following:

  • How access rights and privileges are granted.
  • Temporal restrictions used to prevent system access outside of allowable work periods.
  • Mechanisms used to detect unauthorized transaction attempts by authorized and/or unauthorized users.
  • Inactivity time out periods for system lockout.
  • Whether or not encryption is used to prevent access to sensitive files.
  • How separation of duties is enforced.
  • How often ACLs are reviewed.
  • Controls that regulate how users may delegate access permissions or make copies of files or information accessible to other users.

Public Access Controls

If the general public accesses the system, the following should be described or detailed:

  • Information classification schemes.
  • The form(s) of identification and authentication that will be acceptable.
  • Controls to be used to limit what the user can read, write, modify, or delete.
  • Whether copies of information for public access will be made available on a separate system.
  • How audit trails and user confidentiality will be managed.
  • The requirements for system and data availability.

Audit Trails

Regardless of who is able to access the system, the SSCP should be able to describe the additional security controls used to protect the system’s integrity:

  • What is the process to review audit trails? How often are they reviewed? By whom? Under what conditions?
  • Does the audit trail support accountability by providing a trace of user actions?
  • Are there mechanisms in place to safeguard individual user privacy and confidentiality of user information (PII) captured as part of the audit trail?
  • Are audit trails designed and implemented to record appropriate information that can assist in intrusion detection and remediation?
  • Is separation of duties between those who administer the access control function and those who administer the audit trail used and enforced?

Operational Controls

Operational control policies address process-based security controls that are implemented and executed by people. Examples of operational controls may include the following:

  • Change management processes.
  • Configuration management processes.
  • Authorization processes.

Managerial Controls

Managerial controls address security topics that can be characterized as managerial. They are techniques and concerns that are normally addressed by management in the organization's computer security program. In general, they focus on the management of the computer security program and the management of risk within the organization.

Security policies are formal, written documents that set the expectations for how security will be implemented and managed in an organization. Security policies may be specific, setting forth the rules or expectations for administering or managing one aspect of the security program or a particular type of technology, or they may be more general, defining the types of practices the organization will adopt to safeguard information systems assets. Policies are relatively static, and typically do not reference specific technology platforms or protocols. Because of this, policies remain current in the face of technical changes. Examples of general security policies are security program policies, which set forth the operating principles of the security program, and acceptable usage policies, which prescribe the authorized uses of information systems assets. General policies, especially those governing acceptable usage of e-mail, Internet, and other assets, may require employee signatures indicating that they have read, understood the policy, and agree to comply with it.

Subject-specific security policies typically address a limited area of risk related to a particular class of assets, type of technology, or business function. Examples of specific security policies include:

  • E-Mail and Internet Usage Policies
  • Antivirus Policy
  • Remote Access Policy
  • Information Classification Policy
  • Encryption Policies

Policy Document Format

The security practitioner should understand the basic elements of an information security policy that define and enforce the organization’s security practices. Typical policy elements include:

  • Objective—This statement provides the policy’s context. It gives background information and states the purpose for writing the policy, including the risk or threat the policy addresses and the benefits to be achieved by policy adherence.
  • Policy Statement—A succinct statement of management’s expectations for what must be done to meet policy objectives.
  • Applicability—This lists the people to whom the policy applies, the situations in which it applies, and any specific conditions under which the policy is to be in effect.
  • Enforcement—How compliance with the policy will be enforced using technical and administrative means. This includes consequences for noncompliance.
  • Roles and Responsibilities—States who is responsible for reviewing and approving, monitoring compliance, enforcing, and adhering to the policy.
  • Review—Specifies a frequency of review, or the next review date on which the policy will be assessed for currency and updated if needed.

To be effective, security policies must be endorsed by senior management, communicated to all affected parties, and enforced throughout the organization. When policy violations occur, disciplinary action commensurate with the nature of the offense must be taken quickly and consistently.

Policy Life Cycle

Security policies are living documents that communicate management expectations for behavior. Policy development begins with determining the need. The need for a policy may arise from a regulatory obligation, an operational risk, or a desire to enforce a particular set of behaviors that facilitate a safe, productive work environment. Impacted parties, such as human resources, legal, audit, and business line management, should be identified so that they can participate throughout the development process. Once the need is determined and the team assembled, the security practitioner should address the following areas:

  • State the Objective—A clear statement of policy objectives answers the question, “Why are we developing this policy?” The statement of objective will guide development of the specific points in the policy statement and will help keep team discussions in scope and focused.
  • Draft the Policy Specifics—The policy statement should be drafted in simple, clear language that will be easily understood by those who must comply with the policy. Avoid vague statements that could be open to multiple interpretations, and be sure to define all technical terms used in the policy document.
  • Identify Methods for Measurement and Enforcement—Policy enforcement mechanisms may include technical controls such as access management systems, content blocking, and other preventive measures as well as administrative controls such as management oversight and supervision. Compliance with policy expectations can be measured through audit trails, automated monitoring systems, random or routine audits, or management supervision. The means of monitoring or measuring compliance should be clearly understood, as well as the logistics of enforcement. The logistics of taking and documenting disciplinary action should be established at this time to ensure that the organization is willing and able to enforce policy, and prepared to apply corrective action quickly and consistently.
  • Communication—The timing, frequency, and mechanism by which the policy will be communicated to employees and others should be established before final policy approval. Expectations must be clearly communicated and regularly enforced so that everyone remains aware of what constitutes appropriate conduct. Whenever disciplinary action may be taken in response to policy violations, it is especially important that management make every effort to ensure that employees are made aware of the policy and what they must do to comply. Some organizations require employees to sign a form acknowledging their receipt and understanding of key policies and agreeing to comply with expectations.
  • Periodic Review—Policies should be reviewed at least annually to ensure that they continue to reflect management’s expectations, current legal and regulatory obligations, and any changes to the organization’s operations. Policy violations that have occurred since the last review should be analyzed to determine whether adjustments to policy or associated procedures, or enhancements to communication and enforcement mechanisms may be needed.

Standards and Guidelines

A standard is a formal, documented requirement that sets uniform criteria for a specific technology, configuration, nomenclature, or method. Standards that are followed as common practice but are not formally documented or enforced are so-called de facto standards; such standards often become formalized as an organization matures. Some examples of security standards include account naming conventions, desktop and server antivirus settings, encryption key lengths, and router ACL (access control list) configurations. Standards provide a basis for measuring technical and operational safeguards for accuracy and consistency. A baseline is a detailed configuration standard that includes specific security settings. Baselines can be used as a checklist for configuring security parameters and for measurement and comparison of current systems to a standard configuration set.
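
The following is a minimal sketch, assuming hypothetical setting names, of how a baseline can serve as a machine-readable checklist against which a system's current configuration is compared.

    # Hypothetical baseline: required security settings and their expected values.
    BASELINE = {
        "antivirus_autoupdate": True,
        "scan_on_file_open": True,
        "password_min_length": 12,
    }

    def audit_against_baseline(current):
        """Return settings that deviate from the baseline as (expected, actual) pairs."""
        return {name: (expected, current.get(name))
                for name, expected in BASELINE.items()
                if current.get(name) != expected}

    # A desktop with automatic updates disabled is reported as a deviation.
    print(audit_against_baseline({"antivirus_autoupdate": False,
                                  "scan_on_file_open": True,
                                  "password_min_length": 12}))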

Guidelines, on the other hand, are recommended practices to be followed to achieve a desired result. They are not mandatory and provide room for flexibility in how they are interpreted and implemented; therefore, they are rarely enforced except through an organization’s culture and norms. Guidelines are often instructional in nature. They are useful for cases where an organization wishes to provide enough structure to achieve an acceptable level of performance while allowing room for innovation and individual discretion. Some examples of security guidelines include methods for selecting a strong password, criteria for evaluating new security technology, and suggested training curricula for security staff.

Standards, baselines, procedures and, to a lesser extent, guidelines, help organizations maintain consistency in the way security risks are addressed, and thus provide assurance that a desirable level of security will be maintained. For example, a desktop antivirus standard might specify that all desktops be maintained at the current version of software, configured to receive automatic updates, and set to scan all executable files and templates whenever a file is opened or modified. By consistently applying the same criteria across the board, all desktops are equally protected from virus threats and this can be assured through periodic scanning or auditing. In addition, unlike guidelines, standards specify repeatable configurations. This enhances productivity by allowing practitioners to develop reusable templates that can be applied quickly and easily either manually, or through automated configuration management tools. Similarly, programmers can create standard security logic for functions such as login sequences, input validation, and authority checks, and store this logic in a central repository as reusable components that can be compiled into new business applications. This saves time and effort, reduces errors, and ensures enforcement of application security rules during the development life cycle.

Standards differ from policies in that they are typically more technical in nature, are more limited in scope and impact, do not require approval from executive management, and are more likely than policies to change over time. Standards are often developed to implement the details of a particular policy. Because of their more detailed technical nature, security practitioners responsible for administering security systems, applications, and network components typically play a more active role in the development of standards than in policy development, which largely occurs at management levels in an organization. Many organizations have formal standards and procedures review committees composed of IT practitioners, whose role is to assist in the development of standards documents, review documents for clarity, accuracy, and completeness, identify impacts, and often implement approved standards.

A baseline is a special type of standard that specifies the minimum set of security controls that must be applied to a particular system or practice area in order to achieve an acceptable level of assurance. Baselines may be derived from best practices frameworks and further developed according to your organization’s unique needs. They are often documented in the form of checklists that can be used by teams to specify the minimum security requirements for new and enhanced systems.

Standards should be reviewed at specified intervals, at a minimum annually, to ensure they remain current. Standards must often be modified in response to:

  • Introduction of new technology
  • Addition of configurable features to a system
  • Change in business operations
  • Need for additional controls in response to new threats or vulnerabilities

External to the organization, the term standards is used in two special contexts: industry standards and open standards. Industry standards are generally accepted formats, protocols, or practices developed within the framework of a specific industrial segment, such as engineering, computer programming, or telecommunications. Industry standards may be developed by leading manufacturers, such as IBM’s ISA (Industry Standard Architecture) PC bus standard, for use in their equipment and compatible equipment developed by other vendors. They may also be developed by special interest groups such as the Institute of Electrical and Electronics Engineers (IEEE), American National Standards Institute (ANSI), or International Telecommunication Union (ITU). Industry standards are not always formally developed and accepted, but may be so widely adopted by organizations across industries that vendors find it necessary to incorporate them in products and services in order to serve their customers’ needs. Examples of these de facto industry standards include the PCL print control language developed by Hewlett-Packard and the PostScript laser printer page description language developed by Adobe.

In contrast to industry standards, open standards are specifications that are developed by standards bodies or consortia and made available for public use without restrictions. They are designed to promote free competition, portability, and interoperability among different implementations. The standards themselves are platform independent, and are published as source code or as a set of detailed specification documents that can be used to develop new products and services that can be integrated into existing, standards-based products.

Development of open standards follows due process and is a collaborative venture. Typically, an expert committee or working group develops a draft and publishes it for peer review and open comment before formalizing the results in a standards document. Some bodies such as the Internet Engineering Task Force (IETF) produce documents called RFCs (Request for Comment) for this purpose. Formal approval completes the process of codifying a standard for final publication. The Internet and many of its protocols and services are based on open standards. For example, TCP/IP (Transmission Control Protocol/Internet Protocol) is an open standard that is implemented in every operating system and network device. Just imagine what would happen if vendors decided to implement their own version of TCP/IP—the Internet would simply cease to function.

Organizations choosing to adopt open standards find that they are not as locked in to specific vendor solutions and proprietary interoperability requirements, and may be able to streamline their approach to securing and managing distributed systems. Open standards allow such organizations to adopt a “best of breed” strategy when selecting security and other technologies—that is, where individual solutions are selected for depth of functionality and ability to meet most requirements—rather than a uniform, single-vendor strategy that may be comprehensive yet not provide all the functionality desired in individual components. When all components of a security system are implemented according to open standards, they can work together without cumbersome manual intervention to provide a comprehensive security solution.

Note that the term “open standards” should never be confused with open source. Open source is software that is freely distributed and available for modification and use at little or no cost. To be considered open source, software must be distributed in source code and compiled form; must allow for derivative works; must not discriminate against any person, group, or use; and must not be tied to any specific product or restrict distribution of any associated software with the license. Licensing is covered by public agreement, which typically includes “copyleft” provisions requiring that any derived works be redistributed to the community as open source. A popular example of open source licensing is the GPL (GNU General Public License).

While many organizations have begun to adopt open source, others are more cautious. Some drawbacks to using open source are lack of vendor support, incompatibility with proprietary platforms, and inability to protect the organization’s intellectual property rights to systems built on open source. Because of these limitations, many organizations prefer to limit their use of open source to noncritical systems.

Procedures

Procedures are step-by-step instructions for performing a specific task or set of tasks. Like standards, procedures are often implemented to enforce policies or meet quality goals. Despite the fact that writing documentation can be one of a technical person’s least favorite activities, the importance of documenting security procedures cannot be overemphasized. When followed as written, procedures ensure consistent and repeatable results, provide instruction to those who are unfamiliar with how to perform a specific process, and provide assurance for management and auditors that policies are being enforced in practice. In addition, clear procedures often allow organizations to delegate routine functions to entry-level staff, or develop programs to automate these functions, freeing up more experienced practitioners to perform higher-level work. For example, account provisioning software has been implemented in many organizations to automate procedures that contain multiple steps such as establishing login credentials, home directories, assigning users to groups and roles, and the like. This software can be used by junior staff who lack the depth of knowledge and understanding of the systems configuration behind these procedures. Organizations justify the cost based on savings in salaries, an ability to free up senior staff to perform more complex activities, improvements in consistency and quality by reducing human error, and eliminating the manual effort needed to create an audit trail of account management activity.
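
As a rough sketch of this idea (the step names and systems involved are hypothetical, not any vendor's API), a multi-step provisioning procedure might be wrapped in a single script so that junior staff can execute it consistently and an audit trail is produced automatically:

    # Hypothetical provisioning steps; a real script would call the directory
    # service, file server, and entitlement systems rather than printing.
    def create_login(username):
        print("created login for", username)

    def create_home_directory(username):
        print("created home directory for", username)

    def assign_groups(username, groups):
        print("assigned", username, "to", ", ".join(groups))

    def record_audit_entry(username, action):
        print("audit:", action, "for", username)

    def provision_account(username, groups):
        """Run the documented provisioning procedure in order, leaving an audit trail."""
        create_login(username)
        create_home_directory(username)
        assign_groups(username, groups)
        record_audit_entry(username, "account provisioned")

    provision_account("jdoe", ["staff", "helpdesk"])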

When developing procedures, the security practitioner should not take the reader’s level of knowledge or skill for granted; instead, each step in the process should be explained in sufficient detail for someone who is unfamiliar with the process to be able to perform it independently. All technical terms should be defined, and acronyms should be spelled out. A procedure, like a good novel, has a beginning, middle, and an end. Typical components of a procedure are:

  • Purpose—The reason for performing the procedure, usually the desired outcome.
  • Applicability—Who is responsible for following the procedure, and in what circumstances the procedure is followed.
  • Steps—The detailed steps taken to perform the procedure.
  • Figures—Illustrations, diagrams, tables, or screen shots used to depict a workflow, show values to enter in specific fields, or illustrate display formats, all to enhance ease of use.
  • Decision Points—Yes/no questions whose answers result in branching to different steps in the procedure. These may be written as steps in the procedure or included in a workflow diagram or decision tree.

The development of policies, standards, and procedures is a daily occurrence for the security practitioner. Policies should be created with high-level organizational support and be written to last for years where possible. Policies should direct users to standards and procedures. Standards and procedures should be written as specifically as possible and reference their parent policy when necessary. Whoever is responsible for keeping the standards and procedures updated must ensure that updates remain consistent with the policy and with the technology environment of the organization.

Implementation and Release Management

Release management is a software engineering discipline that controls the release of applications, updates, and patches to the production environment. The goal of release management is to provide assurance that only tested and approved application code is promoted to production or distributed for use. Release management also seeks to meet timeliness goals, minimize disruption to users during releases, and ensure that all associated communication and documentation is issued with new releases of software. The most important role is that of the release manager. The release manager is responsible for planning, coordination, implementation, and communication of all application releases. This function may be situated within a unit of a quality assurance or operational group, or may be part of a separate organization responsible for overall change and configuration management. The decision of where to locate release management functions should be based on the need to achieve separation of duties and rigorous process oversight during application installation or distribution and ongoing maintenance. This is essential to mitigate risks and impacts of unplanned and malicious changes, which could introduce new vulnerabilities into the production environment or user community.

Release management policy specifies the conditions that must be met for an application or component to be released to production, roles and responsibilities for packaging, approving, moving, and testing code releases, and approval and documentation requirements.

The release management process actually begins with the QA testing environment. It is important to ensure that any defects found and corrected in QA are incorporated back into the system test environment, or previously corrected bugs may resurface. An organization may have separate user acceptance and preproduction or staging environments that are subject to release management before code is released to production. Typically the configuration and movement of objects into these environments are controlled by the release management team. Once user acceptance testing is complete, the application is packaged for deployment to the production or preproduction environment and the final package is verified. Automated build tools are typically used to ensure that the right versions of source code are retrieved from the repository and compiled into the application. In organizations that use automated deployment tools, builds are packaged together with automated installers (such as Windows MSI service for desktop operating systems) and other necessary components such as XML configuration and application policy files.

To ensure integrity of the source code or application package and to protect production libraries from the release of unauthorized code, applications may be hashed and signed with a digital signature created with a public key algorithm. Code signing, which is typically used for web applications such as those based on Java and ActiveX to assist users in validating that the application was issued by a trusted source, also has an application in release management. For example, Sun provides a jarsigner tool for Java JAR (Java Archive) and EAR (Enterprise Archive) files that allows an authorized holder of a private key to sign individual JAR files, record signed entries into a file called a Manifest, and convert the Manifest into a signature file containing the digest of the entire Manifest. This prevents any modifications to the archive file once it has been approved and signed. The public key used to verify the signature is packaged along with the archive file so that the person responsible for deploying the file can verify the integrity of the release, also using the jarsigner tool.
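
The jarsigner details above are specific to Java; as a generic sketch of the underlying hash-and-sign idea, the following Python fragment (using the third-party cryptography package, with a key pair generated on the spot purely for illustration) signs a release package and verifies the signature before deployment.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # For illustration only; in practice the private key is held by the release
    # authority and only the public key accompanies the package.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    package = b"contents of the approved release archive"

    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # Sign the package digest with the private key.
    signature = private_key.sign(package, pss, hashes.SHA256())

    # Verification raises InvalidSignature if the package was altered after signing.
    public_key.verify(signature, package, pss, hashes.SHA256())
    print("release package verified")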

Release management tools aid in automating application deployments and enforcing an organization’s release policy. Such features include role-based access control to enforce separation of duties; approval checking and rejection of unapproved packages; component verification tools to ensure that all required application components, documentation, etc., are included in the release; rollback and demotion facilities to protect against incomplete deployments; and auditing and reporting tools to track all aspects of the release process. Automated tools may also be capable of verifying integrity by interrogating a digital signature.

Applications deployed into preproduction or production environments may be smoke tested as part of the release process. Smoke testing is high-level, scripted testing of the major application components and interfaces to validate the integrity of the application before making it publicly available.

The release manager ensures that all documentation and communication regarding the release are prepared and distributed before going “live” with a new or modified application. Any planned outages or other impacts should be communicated in advance, and contacts for assistance with the application should be made available. Following the release, a “burn in” period may be instituted in which special problem resolution and support procedures are in effect.

Release management policy is typically enforced through access control mechanisms that prevent developers from modifying production programs and data. Sensitive system utilities should reside in their own libraries and should only be executed by authorized personnel and processes. Utility programs such as compilers and assemblers should never be executed in production environments.

Systems Assurance and Controls Validation

Systems assurance is the process of validating that existing security controls are configured and functioning as expected, both during initial implementation and on an ongoing basis. Security controls should never be assumed to be functioning as intended. Human error, design issues, component failures, and unknown dependencies and vulnerabilities can impact the initial configuration of security controls. Even once properly implemented, controls can lose effectiveness over time. Changes in the control environment itself, in the infrastructure that supports the control, in the systems which the control was designed to protect, or in the nature of threats that seek to bypass controls all contribute to reduced control effectiveness. Even in the absence of known changes, a “set it and forget it” mentality can expose an organization to risk. Therefore, controls should be tested on a periodic basis against a set of security requirements.

Change Control and Management

Change control refers to the formal procedures adopted by an organization to ensure that all changes to system and application software are subject to the appropriate level of management control. Change control seeks to eliminate unauthorized changes and reduce defects and problems related to poor planning and communication of changes. Change control is often enforced through use of a Change Control Board, which reviews changes for impact, ensures that the appropriate implementation and backout plans have been prepared, and follows changes through approval and post-implementation review.

The change control policy document covers the following aspects of the change process under management control:

  1. Request Submission—A request for change is submitted to the Change Control Board for review, prioritization, and approval. Included in the request should be a description of the change and rationale or objectives for the request, a change implementation plan, impact assessment, and a backout plan to be exercised in the event of a change failure or unanticipated outcome.
  2. Recording—Details of the request are recorded for review, communication, and tracking purposes.
  3. Analysis/Impact Assessment—Changes are typically subject to peer review for accuracy and completeness, and to identify any impacts on other systems or processes that may arise as a result of the change.
  4. Decision Making and Prioritization—The team reviews the request, implementation and backout plans, and impacts and determines whether the change should be approved, denied, or put on hold. Changes are scheduled and prioritized, and any communication plans are put in place.
  5. Approval—Formal approval for the change is granted and recorded.
  6. Status Tracking—The change is tracked through completion. A post-implementation review may be performed.

Systems experience frequent changes. Software packages are added, removed, or modified. New hardware is introduced, while legacy devices are replaced. Updates due to flaws in software are regular business activities for system managers. The rapid advancement of technology, coupled with regular discovery of vulnerabilities, requires proper change control management to maintain the necessary integrity of the system. Change control management is embodied in policies, procedures, and operational practices.

Maintaining system integrity is accomplished through the process of change control management. A well-defined process implements structured and controlled changes necessary to support system integrity and accountability for changes. Decisions to implement changes should be made by a committee of representatives from various groups within the organization such as ordinary users, security, system operations, and upper-level management. Each group provides a unique perspective regarding the need to implement a proposed change. Users have a general idea of how the system is used in the field. Security can provide input regarding the possible risks associated with a proposed change. System operations can identify the challenges associated with the deployment and maintenance of the change. Management provides final approval or rejection of the change based on budget and strategic directions of the organization. Actions of the committee should be documented for historical and accountability purposes.

The change management structure should be codified as an organization policy. Procedures for the operational aspects of the change management process should also be created. Change management procedures are forms of directive controls. The following subsections outline a recommended structure for a change management process.

  1. Requests—Proposed changes should be formally presented to the committee in writing. The request should include a detailed justification in the form of a business case argument for the change, focusing on the benefits of implementation and costs of not implementing.
  2. Impact Assessment—Members of the committee should determine the impacts to operations regarding the decision to implement or reject the change.
  3. Approval/Disapproval—Requests should be answered officially regarding their acceptance or rejection.
  4. Build and Test—Once approved, the change is turned over to operations support for test and integration development. The necessary software and hardware should be tested in a nonproduction environment. All configuration changes associated with a deployment must be fully tested and documented. The security team should be invited to perform a final review of the proposed change within the test environment to ensure that no vulnerabilities are introduced into the production system. Change requests involving the removal of a software or system component require a similar approach: the item should be removed from the test environment and a determination made regarding any negative impacts.
  5. Security Impact Assessment—A security impact assessment is performed to determine the impact of the proposed change to confidentiality, integrity, or availability. Should the change introduce risk, the security impact assessment should qualify and quantify the risk as much as possible and provide mitigation strategies.
  6. Notification—System users are notified of the proposed change and the schedule of deployment.
  7. Implementation—The change is deployed incrementally, when possible, and monitored for issues during the process.
  8. Validation—The change is validated by the operations staff to ensure that the intended machines received the deployment package. The security staff performs a security scan or review of the affected machines to ensure that new vulnerabilities are not introduced. Changes should be included in the problem tracking system until operations has ensured that no problems have been introduced.
  9. Documentation—The outcome of the system change, to include system modifications and lessons learned, should be recorded in the appropriate records. This is the way that change management typically interfaces with configuration management.

Change management can involve several different roles, each with its own responsibilities. The division of these roles can vary from organization to organization, but the following are some of the common major roles:

  • Change Manager—Individual in charge of CM policies and procedures, including mechanisms for requesting, approving, controlling, and testing changes.
  • Change Control Board—Responsible for approving system changes.
  • Project Manager—Manages budgets, timelines, resources, tasks, and risk for systems development, implementation, and maintenance.
  • Architects—Develop and maintain the functional and security context and technical systems design.
  • Engineers and Analysts—Develop, build, and test system changes, and document the rationale for and details of the change.
  • Customer—Requests changes and approves functional changes in the design and execution of a system.
  • System Security Officer—Ensures planned changes do not have adverse security impacts by performing security impact assessments for each change. The system security officer is also responsible for assisting the system's owner with updating relevant security documentation.

Configuration Management

Throughout the system life cycle, changes made to the system, its individual components, or its operating environment can introduce new vulnerabilities and thus impact this security baseline. Configuration management (CM) is a discipline that seeks to manage configuration changes so that they are appropriately approved and documented, so that the integrity of the security state is maintained, and so that disruptions to performance and availability are minimized. Unlike change control, which refers to the formal processes used to ensure that all software changes are managed, configuration management refers to the technical and administrative processes that maintain integrity of hardware and system software components across versions or releases.

Typical steps in the configuration management process are: change request, approval, documentation, testing, implementation, and reporting. A configuration management system consisting of a set of automated tools, documentation, and procedures is typically used to implement CM in an organization. The system should identify and maintain:

  • Baseline hardware, software, and firmware configurations
  • Design, installation, and operational documentation
  • Changes to the system since the last baseline
  • Software test plans and results

The configuration management system implements the four operational aspects of CM: identification, control, accounting, and auditing.

Organizational hardware and software require proper tracking, implementation testing, approvals, and distribution methods. Configuration management is a process of identifying and documenting hardware components, software, and the associated settings. A well-documented environment provides a foundation for sound operations management by ensuring that IT resources are properly deployed and managed. The security professional plays an important role in configuration management through the identification and remediation of control gaps in current configurations.

Detailed hardware inventories are necessary for recovery and integrity purposes. Having an inventory of each workstation, server, and networking device is necessary for replacement purposes in the event of facility destruction. All devices and systems connected to the network should be included in the hardware list. At a minimum, the hardware list should include the following information about each device and system:

  • Make
  • Model
  • MAC addresses
  • Serial number
  • Operating system or firmware version
  • Location
  • BIOS and other hardware-related passwords
  • Assigned IP address if applicable
  • Organizational property management label or bar code

Software is a similar concern, and a software inventory should minimally include the following (a sketch covering both inventories follows this list):

  • Software name
  • Software vendor (and reseller if appropriate)
  • Keys or activation codes (note if there are hardware keys)
  • Type of license and for what version
  • Number of licenses
  • License expiration
  • License portability
  • Organizational software librarian or asset manager
  • Organizational contact for installed software
  • Upgrade, full, or limited license
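
As a minimal sketch, hardware and software inventory records might be captured as structured data so that they can later be reconciled against network scans; the field values below are purely illustrative.

    from dataclasses import dataclass

    @dataclass
    class HardwareItem:
        make: str
        model: str
        mac_address: str
        serial_number: str
        os_or_firmware: str
        location: str
        ip_address: str
        asset_tag: str

    @dataclass
    class SoftwareItem:
        name: str
        vendor: str
        license_type: str
        license_count: int
        license_expiration: str

    hardware = [HardwareItem("ExampleCo", "Edge-100", "00:11:22:33:44:55", "SN0001",
                             "firmware 2.4", "HQ wiring closet 3", "10.0.0.1", "ASSET-0042")]
    software = [SoftwareItem("ExampleAV", "ExampleCo", "site license", 500, "2026-12-31")]
    print(len(hardware), "hardware items,", len(software), "software titles recorded")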

The inventory is also helpful for integrity purposes when attempting to validate systems, software and devices on the network. Knowing the hardware versions of network components is valuable from two perspectives. First, the security professional will be able to quickly find and mitigate vulnerabilities related to the hardware type and version. Most hardware vulnerabilities are associated with a particular brand and model of hardware. Knowing the type of hardware and its location within the network can substantially reduce the effort necessary to identify the affected devices. Additionally, the list is invaluable when performing a network scan to discover unauthorized devices connected to the network. A new device appearing on a previously documented network segment may indicate an unauthorized connection to the network.

A configuration list for each device should also be maintained. Devices such as firewalls, routers, and switches can have hundreds or thousands of configuration possibilities. It is necessary to properly record and track the changes to these configurations to provide assurance for network integrity and availability. These configurations should also be periodically checked to make sure that unauthorized changes have not occurred.

Operating systems and applications also require configuration management. Organizations should have configuration guides and standards for each operating system and application implementation. System and application configuration should be standardized to the greatest extent possible to reduce the number of issues that may be encountered during integration testing. Software configurations and their changes should be documented and tracked with the assistance of the security practitioner. It is possible that server and workstation configuration guides will change frequently due to changes in the software baseline.

Identification

Identification captures and maintains information about the structure of the system, usually in a configuration management database (CMDB). Each component of the system configuration should be separately identified and maintained as a configuration item (CI) within the CMDB using a unique identifier (name), number (such as a software or hardware serial number), and version identifier. The CMDB may be a series of spreadsheets or documents, or may be maintained within a structured database management system (DBMS). Use of structured databases is preferred to enforce consistency and maintain the integrity of information (such as preventing duplicate entries and preserving associations between CIs) and to safeguard against unauthorized modifications and deletions.

Within the CMDB, changes are tracked by comparing the differences between a CI before and after the change in a change set or delta. The CMDB thus is capable of storing the baseline configuration plus a sequence of deltas showing a history of changes. In addition, the system must maintain a consistent mapping among components so that changes are appropriately propagated through the system. Dependencies between components are identified so that the impacts of logical changes to any one component are known.
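
As a minimal sketch (the CI attributes are hypothetical), the delta between two configurations of a CI can be computed by comparing attribute values before and after a change; the CMDB then stores the baseline plus the history of such deltas.

    def delta(before, after):
        """Return the change set between two configurations of a configuration item."""
        keys = set(before) | set(after)
        return {k: (before.get(k), after.get(k))
                for k in keys if before.get(k) != after.get(k)}

    baseline_ci = {"name": "web-server-01", "app_version": "1.0", "tls_minimum": "1.0"}
    current_ci  = {"name": "web-server-01", "app_version": "1.1", "tls_minimum": "1.2"}

    # Reports app_version and tls_minimum as changed, with before/after values.
    print(delta(baseline_ci, current_ci))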

Automated Configuration Management Tools

Many in-house software development teams use automated tools for software version change control and other aspects of configuration management. Most development platforms include features such as source code comparators, comment generators, and version checkers. When linked to a central repository, these tools use check in/check out functions to copy code from the repository into a development library or desktop environment, make and test modifications, and place the modified code back into the repository. Branching and merging tools help resolve concurrency conflicts when two or more individuals modify the same component. Standalone or add-on tools are available commercially or as open source, and typically contain more robust functionality suited to teams of developers. Tool vendors do not always distinguish between features that manage the CM process, and those that manage actual configurations. Datacenter CM tools, for example, range from standalone CMDBs to full suites that include workflow engines, access control, policy enforcement, and reporting capabilities.

Control

All configuration changes and releases must be controlled through the life cycle. Control mechanisms are implemented to govern change requests, approvals, impact analysis, bug tracking, and the propagation of changes. Control begins early in systems design and continues throughout the system life cycle. Before changes are implemented, they should be carefully planned and subjected to peer review. Implementation and rollback plans (in case of change failure) should accompany the change request. Technical controls to enforce this aspect of CM include access control for the development, test, and production environments, as well as for the CMDB itself.

Accounting

Accounting captures, tracks, and reports on the status of CIs, change requests, configurations, and change history.

Auditing

Auditing is the process of logging, reviewing, and validating the state of CIs in the CMDB. It ensures that all changes are appropriately documented and that a clear history of changes is retained, so that each change can be traced back to the person who made it and to the delta (difference) between the baseline and the current state of the system. Auditing also compares the information in the CMDB to the actual system configuration to ensure that the representation of the system is complete and accurate and that associations between components are maintained.

Security Impact Assessment

Security impact assessment is the analysis conducted by qualified staff within an organization to determine the extent to which changes to the information system affect the security posture of the system. Because information systems are typically in a constant state of change, it is important to understand the impact of changes on the functionality of existing security controls and in the context of organizational risk tolerance. Security impact analysis is incorporated into the documented configuration change control process.

The analysis of the security impact of a change occurs when changes are analyzed and evaluated for adverse impact on security, preferably before they are approved and implemented, but also in the case of emergency/unscheduled changes. Once the changes are implemented and tested, a security impact analysis (and/or assessment) is performed to ensure that the changes have been implemented as approved, and to determine if there are any unanticipated effects of the change on existing security controls.

Security impact analysis supports the implementation of NIST SP 800-53r4 control CM-4 Security Impact Analysis.

System Architecture/Interoperability of Systems

Interoperability describes the extent to which systems and devices can exchange data and interpret that shared data. For two systems to be interoperable, they must be able to exchange data and subsequently present that data such that it can be understood by a user. If two or more systems are capable of communicating and exchanging data, they exhibit syntactic interoperability. Specified data formats, communication protocols, and the like are fundamental; XML and SQL standards are among the tools of syntactic interoperability. Syntactic interoperability is a necessary condition for further interoperability. Beyond the ability of two or more computer systems to exchange information, semantic interoperability is the ability to automatically interpret the information exchanged meaningfully and accurately in order to produce useful results as defined by the end users of both systems. To achieve semantic interoperability, both sides must refer to a common information exchange reference model. The content of the information exchange requests is unambiguously defined: what is sent is the same as what is understood.

With respect to software, the term interoperability is used to describe the capability of different programs to exchange data via a common set of exchange formats, to read and write the same file formats, and to use the same protocols. (The ability to execute the same binary code on different processor platforms is not contemplated by the definition of interoperability.) The lack of interoperability can be a consequence of a lack of attention to standardization during the design of a program. Indeed, interoperability is not taken for granted in the non–standards-based portion of the computing world.

According to ISO/IEC 2382-01, “Information Technology Vocabulary, Fundamental Terms,” interoperability is defined as follows: “The capability to communicate, execute programs, or transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units”.20

Patch Management

The application of software and firmware patches to correct vulnerabilities is a critical component of vulnerability and configuration management practices. Most security breaches that have occurred over the past decade are not the result of so-called zero-day attacks, but rather were perpetrated by attackers exploiting known vulnerabilities. The SQL Slammer worm, which exploited a buffer overflow vulnerability in Microsoft’s SQL server and desktop engines, cost between $950 million and $1.2 billion in lost productivity and denial of service during its first five days in the wild. Yet not seven months later, Blaster arrived on the scene to wreak similar havoc. These attacks could have been prevented by timely and effective application of patches that were already available to administrators, so why did the affected organizations not keep up to date on patches? The answer is that patching, and patching distributed desktop and laptop systems in particular, is not a straightforward process. Vulnerabilities can target a number of systems, including desktop and server operating systems; database management systems; client software such as browsers and office productivity software; and network devices such as routers, switches, and firewalls. The sheer volume of vendor patches to be deployed across these disparate systems necessitates an automated solution that accommodates an organization’s core platforms. The patches themselves must be acquired, tested, distributed, and verified in a coordinated and controlled manner, which means processes must be designed and followed religiously to ensure effectiveness. Application of patches can be disruptive to operations, slowing down systems or making them unavailable during the installation window and often requiring a reboot or restart after installation. Some patches can be “bad,” meaning that they introduce new vulnerabilities, create downstream impacts, or do not deploy correctly on all target systems. Not all patches are of equal criticality to an organization, meaning that someone must make a decision regarding when and why to deploy patches as they are made available. This is typically done through the organization’s change control system.

Despite these obstacles, an organization must adopt some form of patch management discipline to mitigate vulnerabilities. Decisions regarding when, what, and how to patch should not be left up to individual administrators, but should be backed by a formal patch management policy or process and carried out by a specifically designated team or committee. The policy should identify roles and responsibilities, criteria for determining whether and how to deploy patches, and service-level objectives for fully deploying patches at each criticality level. The patch management process includes the following steps:

  • Acquisition—Patches are most often supplied via download from the vendor’s website. Some patch distribution and management systems may automatically scan these sites for available patches and initiate downloads to a centralized, internal site.
  • Testing—Patches must be tested to ensure that they can be correctly distributed and installed, and that they do not interfere with normal system or application functioning. Despite a vendor’s best efforts, patches are often created under pressure to fix critical vulnerabilities and may not be thoroughly regression tested. Furthermore, each organization’s operating environment is unique, and it is impossible to test these myriad variations, so impacts on dependent services and applications and compatibility with all possible configurations are not always identified during vendor testing. Patches should initially be tested in a laboratory environment that contains replicas of standard target machine configurations. A limited pilot deployment may then be used for further testing in the production environment.
  • Approval—Not all patches will be immediately approved for deployment. Noncritical patches and patches that are not applicable to the platforms and services used in the organization may be deferred to a later date, or to a time when they are included in a more comprehensive vendor update. Patches that cannot be deployed via standard means or those that cause issues on test machines may require further planning and testing before they are approved. The approval process should include provisions for emergency deployments of critical security patches.
  • Packaging—Patches must be packaged or configured for distribution and installation on target systems. Depending on how patches are deployed, packaging can take several forms. Some platforms such as Windows provide installation software or scripts that are bundled with the patch and automatically invoked when distributed. Custom scripts can also be written to execute a series of installation actions. Patch management software typically includes facilities to package as well as deploy patches.
  • Deployment—Having an accurate inventory of machines and their current patch levels is critical to successful deployment of patches. Automated patch management and software deployment tools may maintain an independent inventory or CMDB, or may integrate with third-party configuration and asset management software. Deployment features include scheduling, user notification of patch and reboot (with or without a “snooze” option), and ordering options for multiple-patch deployments.
  • Verification—Automated patch management tools should be able to verify correct application of patches and report all successful and unsuccessful deployments back to a centralized console or reporting engine. (A minimal inventory-check sketch follows this list.)
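
To make the inventory-driven approach concrete, here is a minimal sketch (with invented host names and patch identifiers) of reporting which approved patches each machine still lacks, the kind of check that a verification step would automate.

    # Hypothetical inventory: host name -> set of installed patch identifiers.
    INVENTORY = {
        "desktop-001": {"KB0001", "KB0002"},
        "desktop-002": {"KB0001"},
    }
    APPROVED_PATCHES = {"KB0001", "KB0002", "KB0003"}

    def missing_patches(inventory, approved):
        """Report the approved patches that each machine still lacks."""
        return {host: sorted(approved - installed)
                for host, installed in inventory.items()
                if approved - installed}

    # desktop-001 lacks KB0003; desktop-002 lacks KB0002 and KB0003.
    print(missing_patches(INVENTORY, APPROVED_PATCHES))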

To minimize user disruption during the workday, organizations can purchase Wake-on-LAN (WOL) compliant network cards, now standard on most end-user computers and servers. These cards respond to wake-up transmissions (called magic packets) from a centralized configuration management server that are sent before distributing the patch. The server will send the transmission only to those systems that require the scheduled patch updates.
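
The magic packet format itself is simple: six bytes of 0xFF followed by the target MAC address repeated 16 times. The following minimal Python sketch (the MAC address shown is a placeholder) broadcasts such a packet over UDP.

    import socket

    def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
        """Broadcast a Wake-on-LAN magic packet for the given MAC address."""
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(packet, (broadcast, port))

    send_magic_packet("00:11:22:33:44:55")  # placeholder MAC address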

Monitoring System Integrity

A comprehensive configuration and change management program should include a mechanism to monitor or periodically validate changes to system configuration. Sophisticated integrity monitors such as Tripwire integrate with the organization’s CMDB to produce a detailed history of system changes. Integrity checkers work by taking a “snapshot” of the approved system configuration, including UNIX object properties and Windows registry keys, access control lists, and contents of system configuration files. This snapshot is then hashed and cryptographically signed to protect against modification. Periodically, the snapshot is compared to a hash of the current configuration, and any changes are reported back to the administrator or noted directly in the CMDB, if an automated interface exists.
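
Stripped of the cryptographic signing and CMDB integration that commercial tools add, the core of an integrity checker can be sketched in a few lines: record a digest of each monitored file, then later compare current digests to the approved snapshot. The paths shown are illustrative only.

    import hashlib
    from pathlib import Path

    def snapshot(paths):
        """Record a SHA-256 digest for each monitored file."""
        return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

    def changed_files(baseline, current):
        """Return the files whose contents differ from the approved snapshot."""
        return [path for path, digest in baseline.items() if current.get(path) != digest]

    monitored = ["/etc/passwd", "/etc/ssh/sshd_config"]  # illustrative paths
    # baseline = snapshot(monitored)       # taken when the configuration is approved
    # print(changed_files(baseline, snapshot(monitored)))  # run periodically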

Integrity checkers such as Tripwire do not necessarily record who made the change, or prevent unauthorized changes from occurring. Use of additional protections such as host-based IPS and log collection and correlation is recommended to supplement integrity checking functions.
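
As a minimal sketch of the snapshot-and-compare approach (an illustration, not Tripwire itself), the Python fragment below baselines a set of files with SHA-256 hashes, signs the baseline with an HMAC key, and reports any file whose current hash differs. The monitored paths and key handling are simplifying assumptions; a real deployment would protect the key and the baseline far more carefully.

    import hashlib
    import hmac
    import json
    from pathlib import Path

    KEY = b"replace-with-a-protected-key"   # assumption: in practice the key is stored securely elsewhere

    def hash_file(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def take_snapshot(paths, out_file="baseline.json"):
        """Record a signed baseline of file hashes."""
        snapshot = {str(p): hash_file(Path(p)) for p in paths}
        body = json.dumps(snapshot, sort_keys=True).encode()
        signature = hmac.new(KEY, body, hashlib.sha256).hexdigest()
        Path(out_file).write_text(json.dumps({"snapshot": snapshot, "sig": signature}))

    def check_snapshot(baseline_file="baseline.json"):
        """Verify the baseline signature, then report changed files."""
        record = json.loads(Path(baseline_file).read_text())
        body = json.dumps(record["snapshot"], sort_keys=True).encode()
        expected_sig = hmac.new(KEY, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(record["sig"], expected_sig):
            raise RuntimeError("Baseline file appears to have been tampered with")
        for path, expected in record["snapshot"].items():
            if hash_file(Path(path)) != expected:
                print(f"CHANGED: {path}")

    # Example usage (hypothetical monitored files):
    # take_snapshot(["/etc/passwd", "/etc/ssh/sshd_config"])
    # check_snapshot()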

Security Awareness and Training

Common sense tells us that the security posture of any organization is only as strong as its weakest link. Increased focus on technical and administrative safeguards in the wake of data security breaches, international terrorism, and large-scale fraud and abuse has improved the situation, but many organizations still fail to consider the human element. Basel III defines operational risk, of which information security risk is a component, as “the risk of direct or indirect loss resulting from inadequate or failed internal processes, people, and systems or from external events.” 21

Security Awareness

Security awareness seeks to reduce the risk related to human error, misjudgment, and ignorance by educating people about the risks and threats to confidentiality, integrity, and availability, and how they can help the organization be more resistant to threats in the performance of their daily job functions. Many national and international regulatory and standards bodies recognize the importance of security awareness and awareness training by making them security program requirements. Critical success factors for any security awareness program are:

  • Senior Management Support—Security success stories happen when individuals begin to treat security as part of their job function. Too many security programs fail because senior management does not buy in to the security team’s mission and message. To get this buy-in, start your awareness program at the top. Involve senior management in the design and oversight of the program, and tie awareness program goals to business goals.
  • Cultural Awareness—There is no such thing as a “one size fits all” awareness program. Is your organization a large, established, hierarchical institution or an agile, high-tech, entrepreneurial firm? Are workers unionized, or independent professionals? Does your organization value customer service, operational efficiency, or personal achievement? These and other questions help define your target audience and deliver a message whose content, style, and format are designed to have impact on your specific audience.
  • Communication Goals—Set communication goals and build a strategy to meet these goals. Perform a needs assessment to identify gaps in security awareness and develop objectives to close these gaps. Be as specific as you can when stating your goals. Do you intend to alert users to social engineering threats? Communicate policy? Teach people how to spot and report incidents? Your objectives will dictate how your awareness program is delivered.
  • Taking a Change Management Approach—The end goal of awareness programs is to produce changes in behavior in your target audience. Understanding barriers to change and methods that successfully stimulate people to change will help you reach this goal. People change because they are motivated to change. Motivators include small prizes and awards, financial incentives, time off, peer recognition, feelings of personal pride and accomplishment, feelings of job competency, and just plain fun. Organizations that tie security awareness to their formal system of salary and performance management are the most likely to foster interest in security issues and compliance with expectations. Promoting awareness that “hits home” by spotlighting issues such as identity theft, spyware and malicious code, online shopping safety, and protection of children on the Internet and tying these to workplace issues is an effective way to capture employee interest.
  • Measurement—Measuring success against stated objectives not only helps justify the awareness program to senior management, but will allow you to identify gaps and continuously improve on your delivery.

General security awareness differs from awareness training, in that awareness is designed to get people’s attention while training instructs people on practices they can adopt to identify, respond to, and protect against security threats. Some specific vehicles for delivering general security awareness include:

  • Threat alerts distributed by e-mail
  • Security-specific newsletters or articles in your company’s newsletter
  • Security awareness Intranet sites
  • Screen savers and computer wallpaper
  • Posters and notices in prominent locations
  • Brochures or pamphlets

Awareness efforts should also focus on user responsibilities for promoting ethical practices and a productive work environment. RFC 1087, “Ethics and the Internet,” promotes personal responsibility and provides sound general principles for behavior when using computing resources. These include prohibitions against unauthorized access, wasteful use of resources, and violations of confidentiality, integrity, and availability of information and systems.

Awareness training is typically more formal in nature and produces more directly measurable results. It is a good idea to make security awareness training a mandatory annual or semiannual event by partnering with Human Resources or training areas. Some organizations require specific training on security policies and procedures and appropriate use of information systems, and may maintain a record of attendance and formal, signed acknowledgment that training has been received. Training can be general or it can focus on specific areas such as:

  • Labeling and handling of sensitive information
  • Appropriate use policies for e-mail, Internet, and other services
  • Customer privacy laws, policies, and procedures
  • Detecting and reporting security incidents
  • Protecting intellectual property and copyright

Training typically addresses issues that are specific to the work environment and provides explicit instruction on policies, standards, and procedures. Training should be required for employees, contractors, and third parties that use or manage the organization’s information systems assets. Instructors providing security awareness training should be well versed in the security domain as well as related policies and procedures.

To measure the effectiveness of security awareness training, consider using surveys or quizzes that test knowledge of key issues before and after training. Other tests for improved awareness include password analyzers or crackers that test the strength of user-selected passwords, number of incidents reported by personnel using established procedures, number of security policy violations, or number of help desk calls or system-related issues due to malicious code, social engineering attacks, etc. You should determine what metrics you will use when designing your awareness program. Results should be reported to senior management to ensure their continued support.

Security Staff Training

Personnel with specific security job responsibilities must have specialized knowledge and skills in traditional security domains as well as the specific tools, technologies, and practices used on the job. Training should begin by identifying roles and responsibilities and determining the specific knowledge and skills needed to perform security functions. Training should cover the basics of the seven SSCP domains, and offer continuing advancement in each of these specialized areas:

  • Access controls
  • Analysis and monitoring
  • Cryptography
  • Malicious code
  • Networks and telecommunications
  • Risk, response, and recovery
  • Security operations and administration

In addition, training in the specific industry regulations, laws, and standards applicable to the security practitioner’s role in the organization should be included in the curriculum.

Most organizations do not have the capacity to provide specialized professional training in these areas, and must look outside the organization for assistance. When selecting a security training provider, take care to ensure that the company employs only highly qualified, experienced trainers who will be prepared to explore topics in depth and provide answers to technical questions. A number of organizations provide security training programs as well as general and advanced security certifications. Some of them (this is by no means an all-inclusive list) are:

  • (ISC)2 SSCP, CISSP, and CAP—Review seminars conducted by (ISC)2 and Authorized Education Affiliates (see http://www.isc2.org for more details).
  • CPP, PCI, PSP—ASIS International offers the Certified Protection Professional (CPP), Professional Certified Investigator (PCI), and Physical Security Professional (PSP) credentials. Information on certification review courses and domain-specific classroom and e-learning courses is available at https://www.asisonline.org/Certification/Board-Certifications/Pages/default.aspx.
  • CISA, CISM—Certified Information Systems Auditor and Certified Information Security Manager certifications are offered by ISACA (Information Systems Audit and Control Association), as well as security courses and conference programs. More information can be found at https://www.isaca.org/Pages/default.aspx.

Interior Intrusion Detection Systems

Within the facility, it is still necessary to maintain levels of security. The layered approach provides for additional security measures while inside the perimeter of the facility. Specifically, not all employees need access to the sensitive areas, such as the phone closets, or need access into the data center. It is not practical or economical to have guards stationed at every security point within the facility; however, an access control system can provide the necessary security controls throughout the building.

A card reader can control access into a specific room. This can be controlled through the access control software, which will be maintained within the security control center. If the individual has access to the room, the employee will place his badge up to the reader and it will release the electric lock and allow entry.

Other elements necessary for this control of interior access are described in the following sections.

Balanced Magnetic Switch (BMS)

This device uses a magnetic field or mechanical contact to determine if an alarm signal is initiated. One magnet will be attached to the door and the other to the frame; when the door is opened the field is broken. A BMS differs from standard magnetic status switches in that a BMS incorporates two aligned magnets with an associated reed switch. If an external magnet is applied to the switch area, it upsets the balanced magnetic field such that an alarm signal is received. Standard magnetic switches can be defeated by holding a magnet near the switch. Mechanical contacts can be defeated by holding the contacts in the closed position with a piece of metal or taping them closed. Balanced magnetic switches are not susceptible to external magnetic fields and will generate an alarm if tampering occurs. These switches are used on doors and windows (Figure 2-4).

c02f004.tif

Figure 2-4: Balanced Magnetic Switch (BMS), used on doors and windows, uses a magnetic field or mechanical contact to determine if an alarm signal is initiated.

by SparkFun Electronics, www.flickr.com/photos/sparkfun/15972010413/, Creative Commons Attribution 2.0 Generic, creativecommons.org/licenses/by/2.0/

Motion-Activated Cameras

A fixed camera with a video motion feature can be used as an interior intrusion point sensor. In this application, the camera is directed at an entry door and sends an alarm signal when an intruder enters the field of view. This device has the added advantage of providing a video image of the event, which alerts the security officer monitoring the camera and allows him to determine whether a security force should be dispatched. Typically, one camera can be associated with several doors along a hallway. If a door is forced open, the alarm triggers the camera to begin recording and can give the monitoring officer a video view starting one minute before the alarm was tripped, providing the operator as much information as possible before dispatching a security response. This system uses technology to supplement the guard force: it activates upon motion and gives a control center operator a detailed video of actual events during alarm activation.

Acoustic Sensors

Acoustic sensors are passive listening devices used to monitor building spaces. A typical application is an administrative building that is normally occupied only during daylight working hours. The acoustic sensing system is usually tied into a password-protected building entry control system, which is monitored by a central security monitoring station. When someone has logged into the building with a proper password, the acoustic sensors are disabled; when the building is secured and unoccupied, the acoustic sensors are activated. After-hours intruders make noise, which is picked up by the acoustic array, and an alarm signal is generated. The downside is the false alarm rate from picking up noises such as air conditioning and telephone ringers, so this product must be deployed in an area that will not have ambient noise. Acoustic sensors act as a detection means for stay-behind covert intruders. One way to use the system is as a monitoring device: when it goes into alarm, the system opens an intercom and the monitoring officer can listen to the area. If no intruder is heard, the alarm is cancelled.

Infrared Linear Beam Sensors

Many people recognize this device from spy movies, with their enduring image of secret agents and bank robbers donning special goggles to avoid triggering an active infrared beam. It is also the device found in many homes on garage doors. A focused infrared (IR) light beam is projected from an emitter and bounced off a reflector placed at the other side of the detection area (Figure 2-5). A retroreflective photoelectric beam sensor built into the emitter detects when the infrared beam is broken by a passing person or by an object in its path. If the beam is broken, the door will stop or the light will come on. This device can also be used to notify security of individuals in hallways late at night, when security staffing is typically reduced.

Passive Infrared (PIR) Sensors

A PIR sensor (Figure 2-6) is one of the most common interior volumetric intrusion detection sensors. It is called passive because there is no beam. A PIR picks up heat signatures (infrared emissions) from intruders by comparing infrared receptions to typical background infrared levels. Infrared radiation exists in the electromagnetic spectrum at a wavelength that is longer than visible light. It cannot be seen, but it can be detected. Objects that generate heat also generate infrared radiation, and those objects include animals and the human body. The PIR is set to detect a change in temperature, whether warmer or colder, and to distinguish an object that differs from the environment in which the sensor is placed. Typically, the activation differential is three degrees Fahrenheit. These devices work best in a stable, environmentally controlled space.

c02f005.tif

Figure 2-5: Infrared linear beam sensors

c02f006.tif

Figure 2-6: A passive infrared (PIR) sensor is one of the most common interior volumetric intrusion detection sensors. Because there is no beam, it is called passive.

by Z22, commons.wikimedia.org/wiki/File:Light_switch_with_passive_infrared_sensor.jpg, Attribution-ShareAlike 4.0 International, creativecommons.org/licenses/by-sa/4.0/deed.en

A PIR is a motion detector and will not activate for a person who is standing still because the electronics package attached to the sensor is looking for a fairly rapid change in the amount of infrared energy it is seeing. When a person walks by, the amount of infrared energy in the field of view changes rapidly and is easily detected. The sensor should not detect slower changes, like the sidewalk cooling off at night.

PIRs are available in models that project a 45° detection pattern and can pick up objects 8 to 15 meters away. There are also 360° PIRs, which can be used in a secured room so that any entry activates the PIR. These motion detection devices can also be tied to an alarm keypad located within the protected space. When motion is detected, the system can be programmed to wait a prescribed time while the individual swipes a badge or enters a pass code at the keypad. If identification is successful, the PIR does not send an intruder notification to the central station.

While not solely a security application, PIRs are often used as an automatic request to exit (REX) device for magnetically locked doors. In this application, the REX (Figure 2-7) acts as the automatic sensor that detects a person approaching in the exit direction and deactivates the alarm.

c02f007.tif

Figure 2-7: An automatic request to exit (REX) device (located over the Exit sign) provides for magnetically locked doors, acting as an automatic sensor for detecting an approaching person in the exit direction and deactivates the alarm as the person exits.

Dual-Technology Sensors

Dual-technology sensors provide a common-sense approach to reducing false alarm rates. For example, this technology combines microwave and PIR sensor circuitry within one housing. An alarm condition is generated only if both the microwave and the PIR sensor detect an intruder. Because two independent means of detection are involved and the integrated, redundant devices must react at the same time to cause an alarm, false alarm rates are reduced. More and more devices now incorporate dual technology, which reduces the need for multiple devices and significantly reduces false alarm rates.

Escort and Visitor Control

All visitors entering the facility should sign in and sign out on a visitor’s log to maintain a record of who is in the facility, the timeframe of the visit, and whom they visited, and, in the case of an emergency, to account for everyone for safety purposes.

All visitors should be greeted by a knowledgeable receptionist who in turn will promptly contact the employee that they are there to visit or meet with. There should be some type of controlled waiting area within the lobby so the receptionist can keep track of the visitor and can direct the employee to them, in the event they have never met previously.

Visitors are given temporary badges, but this badge does not double as an access card. The temporary badge will be issued at an entry control point only after the visitor identifies the purpose of the visit and receives approval by the employee being visited. In some organizations, only certain employees may approve visitor access along with the day and time of the visit. In many operations, the visitor is escorted at all times while inside the facility. When the visitor arrives, he will present a form of photo identification, such as a driver’s license, to the receptionist for verification. Some visitor badges are constructed of paper and may have a feature that causes a void line to appear after a preset time period. Typically, the pass is dated and issued for a set period, usually one day. In most cases, a visitor will wear a conspicuous badge that identifies him or her as a visitor and clearly indicates whether an escort is required (often done with color-coded badges). If an escort is required, the assigned person should be identified by name and held responsible for the visitor at all times while on the premises. A visitor management system can be a pen and paper system that records basic information about visitors to the facility. Typical information found in an entry includes the visitor’s name, reason for the visit, date of visit, and the check in and check out times.

Other types of visitor management systems use a computer-based system or a specific visitor software product. Visitor details can be entered manually by the receptionist or, on a higher-end visitor management system, the visitor provides the receptionist with identification, such as a driver’s license or a government or military ID. The receptionist then swipes the person’s identification through a reader. The system automatically populates the database with ID information and recognizes whether the ID is properly formatted or false. The receptionist who is registering the guest identifies the group to which the person belongs—guest, client, vendor, or contractor. Then the badge is printed.

It is best for the employee to come to the lobby area and greet the visitor personally. This is more than a common courtesy because it provides the necessary security in proper identification, escorting, and controlling the movement of the visitor. Some companies initiate a sound security practice by properly identifying the visitor and signing him or her into a visitor management system, but then they allow the visitor to wander the halls of the company trying to find his or her contact. This completely defeats the prior work of identifying and badging the visitor.

Building and Inside Security

Securing the perimeter and interior of a building is always a high priority. There is a wide array of door-securing technologies available. Beyond that, safes, vaults, and containers are important. All of these security measures are almost meaningless without a good key control system in place. These aspects of building security are all discussed in the following sections.

Doors, Locks, and Keys

Door assemblies include the door, its frame, and anchorage to the building. As part of a balanced design approach, exterior doors should be designed to fit snugly in the doorframe, preventing crevices and gaps, which also helps prevent many simple methods of gaining illegal entry. The doorframe and locks must be as secure as the door in order to provide good protection.

Perimeter doors should consist of hollow steel doors or steel-clad doors with steel frames. Ensure the strength of the latch and frame anchor equals that of the door and frame. Permit normal egress through a limited number of doors, if possible, while accommodating emergency egress. Ensure that exterior doors into inhabited areas open outward. Locate hinges on the interior of restricted areas. Use exterior security hinges on doors opening outward to reduce their vulnerability.

If perimeter doors are made of glass, make sure the glazing is laminated material or stronger. Ensure that glass doors allow access only into a public or lobby area of the facility. High-security doors will then need to be established within the lobby area, where access will be controlled. All doors installed for sensitive areas such as telephone closets, network rooms, or any area with access control must have an automatic door-closing device.

Electric Locks

The electric lock is a secure method of controlling a door; an electric lock actuates the door bolt. For secure applications, dual locks can be used. In some cases, power is applied to engage the handle, so the user retracts the bolt rather than the electric lock operator retracting it. Most electric locks can have built-in position switches and request-to-exit hardware. Although offering a high security level, electric locks are expensive. A special door hinge that can accommodate a wiring harness, along with hardware internal to the door, is required. For retrofit applications, electric locks usually require the purchase of a new door.

Electric Strikes

The difference between an electric strike and an electric lock is in the mechanism that is activated at the door. In an electric-lock door, the bolt is moved; in an electric-strike door, the bolt remains stationary and the strike is retracted. As with electric locks, electric strikes can be configured for fail-safe or fail-secure operation, and the logic is the same. In fail-safe configuration, the strike retracts when power is lost, allowing the door to be opened from the public side. In fail-secure configuration, the strike remains in place, so the door stays locked from the public side and a manual key is required to unlock it. Again, as with electric locks, unimpeded exit is allowed by manual activation of the door handle or lever when exiting from the secure side. For retrofit situations, electric strikes rarely require door replacement and can often be installed without replacing the doorframe.

Magnetic Locks

The magnetic lock is popular because it can be easily retrofitted to existing doors (Figure 2-8). The magnetic lock is surface-mounted to the door and doorframe, and power is applied to the magnets continuously to hold the door closed. Magnetic locks are normally fail-safe but do have a security disadvantage. Under U.S. life safety code requirements, doors equipped with magnetic locks must have one manual device (an emergency manual override button) and an automatic sensor (typically a passive infrared sensor (PIR) or request to exit (REX) device) to override the door lock signal when someone approaches the door in the exit direction.22 All locks are controlled by a card reader that, when activated, releases the secured side of the door and allows entry into the facility. While enhancing overall building safety, the addition of these extra devices creates a possible way to compromise the door lock. In the scenario where a REX is used with magnetic locks, it not only turns off the alarm when the individual exits but also deactivates the locking device. This can be a problem if an adversary can get something through or under the door to cause the REX to release the magnetic lock.

c02f008.tif

Figure 2-8: Magnetic Lock

Anti-Passback

In high security areas, a card reader is used on both the entry and exit sides of the door. This keeps a record of who went in and out. Anti-passback is a strategy in which a person must present a credential to enter an area or facility and then use the credential again to “badge out.” This makes it possible to know how long a person is in an area and who is in the area at any given time. This requirement also has the advantage of instant personnel accountability during an emergency or hazardous event. Anti-passback programming prevents users from giving their cards or PINs to someone else to gain access to the restricted area. In a rigid anti-passback configuration, a credential or badge is used to enter an area, and that same credential must be used to exit. If a credential holder fails to properly badge out, entrance into the secured area can be denied.
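
The rigid anti-passback rule amounts to a small state check per credential. The Python sketch below, using hypothetical badge IDs, illustrates one way an access control panel's logic might track it; it is a simplified model, not any particular vendor's implementation.

    # Track whether each credential is currently "inside" or "outside" the area.
    state = {}   # badge ID -> "inside" or "outside"

    def request_access(badge: str, direction: str) -> bool:
        """Grant entry or exit only if it is consistent with the badge's last known location."""
        location = state.get(badge, "outside")
        if direction == "in" and location == "outside":
            state[badge] = "inside"
            return True
        if direction == "out" and location == "inside":
            state[badge] = "outside"
            return True
        return False   # passback violation, e.g., badging in twice without badging out

    print(request_access("B1001", "in"))    # True
    print(request_access("B1001", "in"))    # False - card was passed back or tailgating occurred
    print(request_access("B1001", "out"))   # True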

Turnstiles and Mantraps

A common and frustrating loophole in an otherwise secure ACS can be the ability of an unauthorized person to follow through a checkpoint behind an authorized person, called “piggybacking” or “tailgating.”

The traditional solution is an airlock-style arrangement called a “mantrap,” in which a person opens one door and waits for it to close before the next door will open (Figure 2-9). A footstep-detecting floor can be added to confirm there is only one person passing through. A correctly constructed mantrap or portal will provide for tailgate detection while it allows roller luggage, briefcases, and other large packages to pass without causing nuisance alarms. People attempting to enter side-by-side are detected by an optional overhead sensing array. The mantrap controller prevents entry into secured areas if unauthorized access is attempted.

c02f009.tif

Figure 2-9: A mantrap
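
The interlock behavior just described can be pictured as a few simple rules: only one door may be open at a time, and the inner door opens only for a single authorized occupant. The Python sketch below is a minimal illustration under those assumptions; real portal controllers add sensors, timers, and alarm outputs.

    class Mantrap:
        """Minimal interlock model: one door open at a time; inner door opens
        only for a single authorized occupant."""
        def __init__(self):
            self.outer_open = False
            self.inner_open = False

        def open_outer(self) -> bool:
            if self.inner_open:
                return False           # interlock: inner door must be closed first
            self.outer_open = True
            return True

        def close_outer(self) -> None:
            self.outer_open = False

        def open_inner(self, occupants: int, authorized: bool) -> bool:
            if self.outer_open or occupants != 1 or not authorized:
                return False           # blocks tailgating and unauthorized entry
            self.inner_open = True
            return True

    trap = Mantrap()
    trap.open_outer()                                        # visitor steps in
    trap.close_outer()                                       # outer door closes behind the visitor
    print(trap.open_inner(occupants=1, authorized=True))    # True - single authorized person passes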

Another system that is available is a turnstile, which can be used as a supplemental control to assist a guard or receptionist while controlling access into a protected area. Anyone who has gone to a sporting event has gone through a turnstile. In this approach, the individual’s badge is used to control the turnstile arm and allow access into the facility (Figure 2-10).

c02f010.tif

Figure 2-10: A turnstile can be used as a supplemental control to assist a guard or receptionist while controlling access into a protected area.

A higher-end turnstile is an optical turnstile, which is designed to provide secure access control in the lobby of a busy building. This system is designed as a set of parallel pedestals that form lanes allowing entry or exit. Each barrier is equipped with photoelectric beams, guard arms, and a logic board (Figure 2-11).

To gain access to the interior of the building, an authorized person uses his access card at the optical turnstile. When the access card is verified, the guard arm is dropped, the photoelectric beam is temporarily shut off, and the cardholder passes without creating an alarm. The concept behind these options is to create a secure perimeter just inside the building to ensure only authorized people proceed further into the building, thereby creating the secure working environment.

c02f011.tif

Figure 2-11: A higher-end turnstile is an optical turnstile, which is designed to provide secure access control in the lobby of a busy building.

“Space Saver Drop Arm Turnstiles” by Ed Jacobsen - Own work. Licensed under CC BY-SA 4.0 via Commons - en.wikipedia.org/wiki/Optical_turnstile#/media/File:Space_Saver_Drop_Arm_Turnstiles.jpg

Types of Locks

Key locks are one of the basic safeguards in protecting buildings, personnel, and property and are generally used to secure doors and windows. According to UL Standard 437, door locks and locking cylinders must resist attack through the following testing procedures: the picking test, impression test (a lock is surreptitiously opened by making an impression of the key with a key blank of some malleable material—wax or plastic—which is inserted into the keyway and then filed to fit the lock), forcing test, and salt spray corrosion test for products intended for outdoor use. The door locks and locking cylinders are required by UL standards to resist picking and impression for ten minutes.23

  1. Rim Lock A rim lock, shown in Figure 2-12, is a lock or latch typically mounted on the surface of a door. It is typically associated with a dead bolt type of lock.
  2. Mortise Lock A mortise lock, shown in Figure 2-13, is a lock or latch that is recessed into the edge of a door rather than being mounted to its surface. This configuration has a handle and locking device all in one package.
    c02f012.tif

    Figure 2-12: A rim lock is a lock or latch typically mounted on the surface of a door.

    ©iStockphoto.com/Elena Elisseeva

    c02f013.tif

    Figure 2-13: A mortise lock is a lock or latch that is recessed into the edge of a door rather than being mounted to its surface.

    ©iStockphoto.com/ZoltanFabian

  3. Locking Cylinders The pin tumbler cylinder is a locking cylinder composed of circular pin tumblers that fit into matching circular holes on two internal parts of the lock (Figure 2-14). The pin tumbler functions on the principle that the pin tumblers must be placed into a position that is entirely contained within the plug. Each pin is of a different height, thus accounting for the varying ridge sizes of the key. When the pins are properly aligned, the plug can be turned to unlock the bolt.
    c02f014.tif

    Figure 2-14: A pin tumbler cylinder is a locking cylinder that is composed of circular pin tumblers that fit into matching circular holes on two internal parts of the lock.

    PIN TUMBLER WITH KEY.PNG by Eric Pierce (Wapcaplet) https://commons.wikimedia.org/wiki/File:Pin_tumbler_with_key.png https://en.wikipedia.org/wiki/gnu_free_documentation_license

  4. Cipher Lock A cipher lock, shown in Figure 2-15, is controlled by a mechanical keypad, typically with 5 to 10 digits. When the correct combination is pressed, the lock releases and allows entry. The drawback is that someone looking over a shoulder can see the combination. However, an electronic version of the cipher lock is in production in which a display screen automatically moves the numbers around, so someone trying to watch the screen cannot identify the digits entered unless standing directly behind the user.

    Remember that locking devices are only as good as the wall or door they are mounted in; if the doorframe or the door itself can be easily destroyed, then the lock will not be effective. Any lock will eventually be defeated; its primary purpose is to delay the attacker.

    c02f015.tif

    Figure 2-15: A cipher lock is controlled by a mechanical key pad with digits that when pushed in the right combination will release the lock and allow entry.

    ©iStockphoto.com/KellyThorson

Hi-Tech Keys

Not all lock and key systems are standard metal composite. There have been developments in key technology that offer convenient, reliable access control.

“Intelligent keys” are keys with a built-in microprocessor, which is unique to the individual key holder and identifies the key holder specifically (Figure 2-16). The lock also contains a minicomputer, and the key and lock exchange data, allowing the lock to make valid access decisions based on the parameters established for the key holder. For example, the key will know if the employee is allowed access into the facility after normal business hours; if not, the key will not work. It will also keep track of whose key is being used to access specific locked doors and when the attempts take place. When an employee resigns from the organization, the relevant key is disabled.

“Instant keys” provide a quick way to disable a key by permitting one turn of the master key to change a lock. This method of changing a lock can save both time and money in the event a master key is lost. According to one manufacturer, a 50-story bank building can be rekeyed in six hours by two security guards. The system can go through 10 to 15 changes before having to be re-pinned.

Safes

Safes are often the last bastion of defense between an attacker and an asset. Several types of safes protect not only against theft but also against fire and flood. A safe (Figure 2-17) is defined as a fireproof and burglarproof iron or steel chest used for the storage of currency, negotiable securities, and similar valuables.

c02f016.tif

Figure 2-16: “Intelligent keys” have a built-in microprocessor that is unique to the individual key holder and identifies the key holder specifically

c02f017.tif

Figure 2-17: A safe is a fireproof and burglarproof iron or steel chest used for the storage of currency, negotiable securities, and similar valuables.

by Dave Jones, www.flickr.com/photos/eevblog/19270733900, licensed under Creative Commons 2.0 Generic creativecommons.org/licenses/by/2.0/, image cropped from original

The categories for safes depend on the amount of security needed. Underwriters Laboratories lists several classifications of safe. The following is one such classification, provided here solely as an example:

  1. Tool-Resistant Safe Class TL-15 This type of combination lock safe is designed to be resistant to entry (by opening the door or making a six-inch hand hole through the door) for a net working time of 15 minutes using any combination of the following tools:
    • Mechanical or portable electric hand drills not exceeding one-half-inch size
    • Grinding points, carbide drills (excluding the magnetic drill press and other pressure-applying mechanisms, abrasive wheels, and rotating saws)
    • Common hand tools such as chisels, drifts, wrenches, screwdrivers, pliers, and hammers and sledges not to exceed the eight-pound size, and pry bars and ripping tools not to exceed five feet in length
    • Picking tools that are not specially designed for use against a special make of safe
  2. A TL-15 safe must:
    • Weigh at least 750 pounds or be equipped with anchors and instructions for anchoring in larger safes, in concrete blocks, or to the floor of the bank premises.
    • Have metal in the body that is solid cast or fabricated open-hearth steel at least 1 inch thick with a tensile strength of 50,000 pounds per square inch (psi) and that is fastened to the floor in a manner equal to a continuous 1/4-inch penetration weld of open-hearth steel having an ultimate tensile strength of 50,000 psi.
    • Have any hole provided for insertion of electrical conductors for alarm devices not exceed 1/4 inch in diameter; the hole may be located in the top, side, bottom, or back of the safe body, but it must not permit a direct view of the door or locking mechanism.
    • Be equipped with a combination lock meeting UL Standard No. 768 requirements for Group 2, 1, or 1R locks.
    • Be equipped with a relocking device that will effectively lock the door if the combination lock is punched.

The UL classifications mean that a Tool-Resistant Safe Class TL-30 will take 30 minutes to break into using tools. A TRTL-30 rating means it will take 30 minutes for a combination of tools and torches to break into the safe. The categories go up to a safe that can resist tools, torches, and explosives.

Vaults

A vault (Figure 2-18) is defined as a room or compartment designed for the storage and safekeeping of valuables and has a size and shape that permits entrance and movement within by one or more persons. Vaults generally are constructed to withstand the best efforts of man and nature to penetrate them.

The UL has developed standards for vault doors and vault modular panels for use in the construction of vault floors, walls, and ceilings. The standards are intended to establish the burglary-resistant rating of vault doors and modular vault panels according to the length of time they withstand attack by common mechanical tools, electric tools, cutting torches, or any combination thereof. The ratings, based on the net working time to effect entry, are as follows:

  • Class M—One quarter hour
  • Class 1—One half hour
  • Class 2—One hour
  • Class 3—Two hours

c02f018.tif

Figure 2-18: A vault is a room or compartment designed for the storage and safe-keeping of valuables and has a size and shape that permits entrance and movement within by one or more persons.

©iStockphoto.com/kirstypargeter

Containers

A container is a reinforced filing cabinet that can be used to store proprietary and sensitive information. The standards for classified containers typically come from a government. For example, the U.S. government lists a class 6 container (Figure 2-19) as approved for the storage of secret, top secret, and confidential information. The container must meet protection requirements of 30 man-minutes against covert entry and 20 hours against surreptitious entry, with no forced-entry requirement.

c02f019.tif

Figure 2-19: A class 6 container is approved for the storage of secret, top secret, and confidential information.

Key Control

Key control, or more accurately the lack of key control, is one of the biggest risks that businesses and property owners face. Strong locks and stronger key control are the two essentials of a high-security locking system. Master and sub-master keys are required in most building systems so that janitorial and other maintenance personnel may have access. Thus, the control of all keys becomes a critical element of the key lock system: all keys need to be tightly controlled from the day of purchase by designated personnel responsible for the lock system.

Without a key control system, an organization cannot be sure who has keys or how many keys may have been produced for a given property. Not having a patent-controlled key system leads to unauthorized key duplication, which can lead to unauthorized access or employee theft. Most key control systems utilize patented keys and cylinders. These lock cylinders employ very precise locking systems that can only be operated by the unique keys to that system. Because the cylinders and the keys are patented, the duplication of keys can only be done by factory-authorized professional locksmiths.

The key blanks and lock cylinders are made available only to those same factory authorized professional locksmiths. Procedures may be in place to allow the organization to contract another security professional should the need arise.

All high-security key control systems require specific permission to have keys originated or duplicated. These procedures assure the property owner or manager that they will always know who has keys and how many they possess. If an employee leaves and returns the keys, the organization can be reasonably assured that no copies of the keys were made. Most systems have cylinders that will retrofit existing hardware, keeping the cost of acquisition lower. Some systems employ different levels of security within the system, still giving patented control, but not requiring ultra-high security where it is not needed. These measures are again aimed at cost control.

Most systems can be master keyed; some will coordinate with existing master key systems. There are systems available that allow interchangeable core cylinders for retrofitting of existing systems.

Locks, keys, doors, and frame construction are interconnected and all must be equally effective. If any single link is weak, the system will break down. The Medeco Guide for Developing and Managing Key Control states:24

“The following represents the basic and most critical elements of key control and shall be included, as a minimum, in the key control specification.

  2.1 Facility shall appoint a Key Control Authority or Key Control Manager to implement, execute, and enforce key control policies and procedures.
  2.2 A policy and method for the issuing and collecting of all keys shall be implemented.
  2.3 Keys and key blanks shall be stored in a locked cabinet or container, in a secured area.
  2.4 A key control management program shall be utilized. A dedicated computer software application is preferred.
  2.5 All keys shall remain the property of the issuing facility.
  2.6 A key should be issued only to individuals who have a legitimate and official requirement for the key.
    2.6.1 A requirement for access alone, when access can be accomplished by other means (such as unlocked doors, request for entry, intercoms, timers, etc.), shall not convey automatic entitlement to a key.
  2.7 All keys shall be returned and accounted for.
  2.8 Employees must ensure that keys are safeguarded and properly used.”

Securing Communications and Server Rooms

Communication rooms or closets must maintain a high level of security. Access must be controlled into this area, and only authorized personnel should be allowed to work on this equipment. No matter what transmission mode or media is selected, it is important that a method for securing communications be included. This includes physical protection, such as providing a rigid metallic conduit for all conductors, as well as technical protection, such as encrypting communication transmissions.

What Is Cable Plant Management?

Cable plant management is the design, documentation, and management of the lowest layer of the OSI network model: the physical layer. The physical layer is the foundation of any network, whether it carries data, voice, video, or alarms, and it defines the physical media upon which signals and data are transmitted through the network.

Approximately 70% of your network is composed of passive devices such as cables, cross-connect blocks, and patch panels. Documenting these network components is critical to keeping a network finely tuned. The physical medium can be copper cable (e.g., cat 6), coaxial cable, optical fiber (e.g., single or multimode), wireless, or satellite. The physical layer defines the specifics of implementing a particular transmission medium. It defines the type of cable, frequency, terminations, etc. The physical layer is relatively static. Most change in the network occurs at the higher levels in the OSI model.

Key components of the cable plant include the entrance facility, equipment room, backbone cable, backbone pathway, telecommunication room, and horizontal distribution system.

Entrance Facility

The service entrance is the point at which the network service cables enter or leave a building. It includes the penetration through the building wall and continues to the entrance facility. The entrance facility can house both public and private network service cables. The entrance facility provides the means for terminating the backbone cable. The entrance facility generally includes electrical protection, ground, and demarcation point.

Equipment Room

The equipment room serves the entire building and contains the network interfaces, uninterruptible power supplies, computing equipment (e.g., servers, shared peripheral devices, and storage devices), and telecommunication equipment (e.g., PBX). It may be combined with the entrance facility.

Backbone Distribution System

A backbone distribution system provides connection between entrance facilities, equipment rooms, and telecommunication rooms. In a multi-floor building, the backbone distribution system is composed of the cabling and pathways between floors and between multiple telecommunication rooms. In a campus environment, the backbone distribution system is composed of the cabling and pathways between buildings.

Telecommunication Room

The telecommunication room (TR) typically serves the needs of a floor. The TR provides space for network equipment and cable terminations (e.g., cross-connect blocks and patch panels). It serves as the main cross-connect between the backbone cabling and the horizontal distribution system.

Horizontal Distribution System

The horizontal distribution system distributes the signals from the telecommunication room to the work areas. The horizontal distribution system consists of:

  • Cables
  • Cross-connecting blocks
  • Patch panels
  • Jumpers
  • Connecting hardware
  • Pathways (supporting structures such as cable trays, conduits, and hangers that support the cables from the telecommunication room to the work areas)

Protection from Lightning

A lightning strike to a grounding system produces an elevated ground or ground potential rise (GPR). Any equipment bonded to this grounding system and also connected to wire-line communications will most likely be damaged from outgoing currents seeking remote ground. Personnel working at this equipment are susceptible to harm because they will be in the current path of this outgoing current. The equipment damage from a lightning strike may not be immediate. Sometimes the equipment is weakened by stress and primed for failure at some future time. This is called latent damage, and it leads to premature failure, shortening the equipment's mean time between failures (MTBF).

The best engineering design, for open-ended budgets, is the use of dielectric fiber optic cable for all communications. Obviously, a fiber optic cable is non-conductive, provided that it is an all dielectric cable with no metallic strength members or shield, making isolation no longer a requirement. This is because physical isolation is inherent in the fiber optic product itself. This dielectric fiber optic cable must be placed in a PVC conduit to protect it from rodents.

However, if budgets are tight, the engineering design solution to protect this equipment is to isolate the wire-line communications from remote ground. This is accomplished using optical isolators or isolation transformers. This equipment is housed together, mounted on a non-conducting surface in a non-conducting cabinet, and is called the high voltage interface (HVI).

The HVI isolates the equipment during a GPR and prevents any current flow from a higher potential grounding system to a lower potential grounding system. This totally protects any equipment from damage or associated working personnel from harm. No ground shunting device ever made, no matter how fast acting, will ever completely protect equipment from a GPR. Ground shunting devices are connected to the elevated ground and during a GPR offer an additional current path in the reverse direction from which they were intended to operate. Obviously, this flow of current, even away from the equipment, will immediately cause equipment damage and harm to working personnel.

Server Rooms

A server room needs a higher level of security than the rest of the facility. This should encompass a protected room with no windows and only one controlled entry into the area. Remember that once servers are compromised, the entire network is at risk. While some server attacks are merely annoying, others can cause serious damage. In order to protect the organization, it is paramount to protect your servers. Physical access to a system is almost a guaranteed compromise if performed by a motivated attacker.25 Therefore, server room security must be comprehensive and constantly under review.

Rack Security

It would be unusual for everyone in a room full of racks to have the need to access every rack; rack locks can ensure that only the correct people have access to servers and only telecommunications people have access to telecommunications gear. “Manageable” rack locks that can be remotely configured to allow access only when needed—to specific people at specific times—reduce the risk of an accident, sabotage, or unauthorized installation of additional equipment that could cause a potentially damaging rise in power consumption and rack temperature.

Restricted and Work Area Security

Depending on the configuration and operations structure of the data center, administrators and operators can be within the secured portion of the data center or can be in an auxiliary area. In most cases the latter is true, for the simple fact that there just isn’t enough room within the data center to maintain equipment and personnel. Additionally, server rooms are noisy and cold, not ideal conditions for human beings.

Individuals who handle sensitive information must maintain a common-sense, security-minded attitude within the confines of the facility. Not everyone who works on sensitive information needs to be inside a secured room, but even areas not considered high security still carry requirements to maintain a responsible security profile. Store and maintain sensitive information in security containers, which can be as simple as a filing cabinet with locking bars and a padlock. Maintain a clean desk approach, which encourages personnel to lock up information when they are finished for the day.

Maintain strong password protection for workstations. Never have computer screens facing a window without blinds or some type of protective film; privacy filters and screen protectors keep prying eyes off sensitive work. Have a shredding company destroy trash containing proprietary and customer-confidential information. This prevents outsiders from obtaining confidential information through dumpster diving.

In highly restricted work areas such as government sensitive compartmented information facilities (SCIFs), there is a requirement to increase security measures to ensure tighter control of access to these areas. The physical security protection for a SCIF is intended to prevent as well as detect visual, acoustical, technical, and physical access by unauthorized persons. An organization may not be required to maintain government-classified information; however, the company's livelihood and your employment may be tied to proprietary information that requires the same level of security.

SCIF walls will consist of three layers of 5/8-inch drywall and will run from true floor to true ceiling. There will typically be only one SCIF entrance door, which will have an X-09 combination lock along with access control systems. According to United States Director of Central Intelligence Directive 1/21 (DCID 1/21), “all SCIF perimeter doors must be plumbed in their frames and the frame firmly affixed to the surrounding wall. Door frames must be of sufficient strength to preclude distortion that could cause improper alignment of door alarm sensors, improper door closure, or degradation of audio security. All SCIF primary entrance doors must be equipped with an automatic door closer.”26

Basic HVAC requirements call for any duct penetration into the secured area larger than 96 square inches to include man bars to prevent a perpetrator from climbing through the ducts.

White noise or sound masking devices need to be placed over doors, in front of plenum, or pointed toward windows to keep an adversary from listening to classified conversations. Some SCIFs use music or noise that sounds like a constant flow of air to mask conversation. All access control must be managed from within the SCIF. Intrusion detection is sent out to a central station with the requirement that a response force will respond to the perimeter of the SCIF within 15 minutes.

Data Center Security

When discussing the need to secure the data center, security professionals immediately think of sabotage, espionage, or data theft. While the need is obvious for protection against intruders and the harm caused by intentional infiltration, the hazards from the ordinary activity of personnel working in the data center present a greater day-to-day risk for most facilities. For example, personnel within the organization need to be segregated from access areas where they have no “need to know” for that area. The security director would typically have physical access to most of the facility but has no reason to access financial or HR data. The head of computer operations might have access to computer rooms and operating systems but not the mechanical rooms that house power and HVAC facilities. It comes down to not allowing wandering within the organization.

As data centers and web hosting sites grow, the need for physical security at the facility is every bit as great as the need for cybersecurity of networks. The data center is the brains of the operation, and as such only specific people should be granted access. The standard scenario for increased security at a data center would consist of the basic security-in-depth: progressing from the outermost (least sensitive) areas to the innermost (most sensitive) areas. Security will start with entry into the building, which will require passing a receptionist or guard, then using a proximity card to gain building entry. For access into the computer room or data center, it will now require the same proximity card along with a PIN (Figure 2-20), plus a biometric device. Combining access control methods at an entry control point will increase the reliability of access for authorized personnel only. Using different methods for each access level significantly increases security at inner levels because each is secured by its own methods plus those of outer levels that must be entered first. This would also include internal door controls.
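
To make the layered-entry idea concrete, the following Python sketch models an inner-door decision that requires all three factors described above (proximity card, PIN, and a biometric match). The enrollment data and the biometric comparison are hypothetical placeholders, not a real access control or biometric API.

    import hashlib

    # Hypothetical enrollment records: card ID -> (salted PIN hash, biometric template ID)
    enrolled = {
        "CARD-7421": (hashlib.sha256(b"salt" + b"4096").hexdigest(), "template-17"),
    }

    def biometric_matches(template_id: str, live_sample: str) -> bool:
        """Placeholder for a biometric comparison; a real system would call the reader's SDK."""
        return live_sample == template_id

    def inner_door_decision(card_id: str, pin: str, live_sample: str) -> bool:
        """Grant access only if the card is enrolled, the PIN matches, and the biometric matches."""
        record = enrolled.get(card_id)
        if record is None:
            return False
        pin_hash, template_id = record
        if hashlib.sha256(b"salt" + pin.encode()).hexdigest() != pin_hash:
            return False
        return biometric_matches(template_id, live_sample)

    print(inner_door_decision("CARD-7421", "4096", "template-17"))   # True
    print(inner_door_decision("CARD-7421", "9999", "template-17"))   # False - wrong PIN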

For a data center, the use of an internal mantrap or portal would provide increased entry and exit control. A portal (Figure 2-21) allows only one person in at a time and will only open the inner door once the outer door is closed. The portal can have additional biometrics within the device that must be activated before the secured side door opens.

c02f020.tif

Figure 2-20: A card reader with PIN and biometric features for additional security.

©iStockphoto.com/panumas nikomkai

c02f021.tif

Figure 2-21: A secure portal allows only one person in at a time and will only open the inner door once the outer door is closed.

©iStockphoto.com/Baloncici

The two-person rule is a strategy in which two people must be in an area together, making it impossible for a person to be in the area alone. Two-person rule programming is optional with many access control systems; it prevents an individual cardholder from entering a selected empty security area unless accompanied by at least one other person. Use of the two-person rule can help eliminate insider threats to critical areas by requiring at least two individuals to be present at any time. It also serves life safety within a security area: if one person has a medical emergency, assistance will be present.
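
As a hedged illustration (a minimal sketch, not any particular access control product), the Python fragment below enforces one common interpretation of the rules just described: entry into an empty area requires at least two credentials presented together, and an exit is denied if it would leave a single person alone.

    occupants = set()   # badge IDs currently inside the controlled area

    def request_entry(badges):
        """Entry into an empty area requires at least two people badging in together."""
        if not occupants and len(badges) < 2:
            return False
        occupants.update(badges)
        return True

    def request_exit(badge):
        """Deny an exit that would leave exactly one person alone in the area."""
        if badge not in occupants:
            return False
        if len(occupants) - 1 == 1:
            return False            # would leave one person working alone
        occupants.discard(badge)
        return True

    print(request_entry(["A100"]))           # False - cannot enter an empty area alone
    print(request_entry(["A100", "B200"]))   # True  - two people enter together
    print(request_exit("A100"))              # False - would leave B200 alone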

Utilities and HVAC Considerations

Beyond the human component, there are other important facets of securing data centers. These include power, HVAC, air purity, and water.

Utilities and Power

Because they often host mission-critical servers, data centers are built with both battery and generator backups. If the power cuts out, the batteries take over, just as they might in a home user’s uninterruptible power supply. The generators then start and begin producing power before the batteries are exhausted. Areas that contain backup generators and power supplies need similar protection. This area can be controlled with key access or a card access reader, and electric door strikes can be installed for entry. Access should be limited to specific personnel; there is no need to give everyone access to the generator room. This room maintains backup power for the entire facility in the event of a power outage emergency. Two key aspects of power are the UPS and generator:

  1. Uninterruptible Power Supply (UPS) A UPS is a battery backup system that maintains a continuous supply of electric power to connected equipment by switching to its internal batteries when utility power is not available. A UPS can provide power only for a limited time, typically a few minutes, but that is often enough to ride out power company glitches or short outages. Even if the outage lasts longer than the UPS batteries, the UPS provides the opportunity to execute an orderly shutdown of the equipment (a rough runtime estimate is sketched after this list).
  2. Generator Generator power should be activated automatically by the transfer switch in the event of a utility failure. The data center load is carried by the UPS units only briefly, as the generator should be active and up to speed within 10 seconds of a power failure. A generator (Figure 2-22) typically runs on diesel fuel and can be located outside the facility or inside a parking garage. The generator room needs to be protected from unauthorized access by access control devices or key-locked doors. The generator will operate as long as fuel is supplied; some generators have a 300-gallon tank, and the facilities manager will typically have a contract with a local distributor to supply fuel. Most operation centers have more than one generator and test them once a month. A generator located outside needs protective barriers placed around it to guard against a vehicle running into it.
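
To make the “few minutes” figure for UPS runtime concrete, here is a minimal back-of-the-envelope estimate in Python. The battery capacity, load, and inverter efficiency figures are illustrative assumptions, not vendor specifications.

```python
# Rough UPS runtime estimate: usable battery energy divided by the IT load,
# derated for inverter losses. Values below are examples only.

def ups_runtime_minutes(battery_wh: float, load_w: float,
                        inverter_efficiency: float = 0.9) -> float:
    """Approximate minutes of runtime for a given load in watts."""
    return (battery_wh * inverter_efficiency / load_w) * 60

# Example: a 5,000 Wh battery string carrying a 20 kW load
print(round(ups_runtime_minutes(5_000, 20_000), 1), "minutes")  # 13.5 minutes
```

Even a runtime of a few minutes is comfortably longer than the roughly 10 seconds a properly maintained generator needs to come up to speed.
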
HVAC

HVAC stands for heating, ventilation, and air-conditioning. Heat can cause extensive damage to computer equipment, causing processors to slow down or halt and even causing solder connections to loosen and fail. Excessive heat degrades network performance and causes downtime, so data centers and server rooms need an uninterrupted cooling system. Generally, there are two types of cooling: latent and sensible.

c02f022.tif

Figure 2-22: A backup generator is activated automatically in the event of a utility failure by the transfer switch.

Analogue Kid at English Wikipedia https://creativecommons.org/licenses/by/2.5/legalcode

Latent cooling is the ability of the air-conditioning system to remove moisture. This is important in typical comfort-cooling applications, such as office buildings, retail stores, and other facilities with high human occupancy and use. The focus of latent cooling is to maintain a comfortable balance of temperature and humidity for people working in and visiting such a facility. These facilities often have doors leading directly to the outside and a considerable amount of entrance and exit by occupants.

Sensible cooling is the ability of the air-conditioning system to remove heat that can be measured by a thermometer. Data centers generate much higher heat per square foot than typical comfort-cooling building environments, and they are typically not occupied by large numbers of people. In most cases, they have limited access and no direct means of egress to the outside of the building except for seldom used emergency exits.

Data centers have minimal need for latent cooling and require little moisture removal. Sensible cooling systems are engineered with a focus on heat removal rather than moisture removal and therefore have a higher sensible heat ratio; they are the most appropriate choice for the data center. Cooling systems are also dovetailed into the facility’s power supply overhead; if there is a power interruption, the cooling system is affected, and the computers cannot continue operating without cooling. Portable air-conditioning units can be used as a stopgap in case of HVAC failure, but good design accounts for cooling systems in the backup power plan.
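
The sensible heat ratio mentioned above is simply the sensible cooling capacity divided by the total (sensible plus latent) capacity. The short sketch below illustrates the calculation; the example figures are illustrative assumptions, not engineering data.

```python
# Sensible heat ratio (SHR) = sensible cooling / (sensible + latent cooling).
# Data center units are selected for a high SHR; comfort-cooling units run lower.

def sensible_heat_ratio(sensible_kw: float, latent_kw: float) -> float:
    return sensible_kw / (sensible_kw + latent_kw)

print(sensible_heat_ratio(95, 5))    # 0.95: heat removal dominates (data center)
print(sensible_heat_ratio(70, 30))   # 0.70: more moisture removal (comfort cooling)
```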

Air Contamination

Over the past several years, there has been increasing awareness of anthrax and other airborne attacks. Harmful agents introduced into an HVAC system can rapidly spread throughout the structure and infect all persons exposed to the circulated air.

To avoid air contamination, place intakes at the highest practical level in the facility. For protection against malicious acts, the intakes should also be covered by screens so that objects cannot be tossed into the intakes or into air wells from the ground. Such screens should be sloped to allow thrown objects to roll or slide off the screen, away from the intake. Many existing buildings have air intakes that are located at or below ground level. For those that have wall-mounted or below-grade intakes close to the building, the intakes can be elevated by constructing a plenum or external shaft over the intake.

The following guidelines enhance security in this critical aspect of facility operations:

  • Restrict access to main air intake points to persons who have a work-related reason to be there.
  • Maintain access rosters of pre-approved maintenance personnel authorized to work on the system.
  • Escort all contractors with access to the system while on site.
  • Ensure that all air intake points are adequately secured with locking devices.

All buildings have air intake points that are roof-mounted, exterior wall-mounted, or housed in a free-standing unit on the ground outside the building. Because of concerns such as “sick building syndrome” and the spread of colds and flu through a building’s HVAC system, many governments require all new buildings to mix a certain percentage of fresh air in with recirculated air in the HVAC system. The volume of fresh air taken in is based on the square footage of the building and the number of employees working inside.

One method of reducing the risk of biological agents circulating throughout a building is installation of UV light filters in the HVAC system’s supply and return ducts. UV light inhibits the growth and reproduction of germs, bacteria, viruses, fungi, and mold. UV light is the portion of the electromagnetic spectrum that lies just beyond the violet edge of the visible spectrum. The sun acts as a natural outdoor air purification system, controlling airborne bacteria with UV rays. UV light penetrates a microorganism and breaks down molecular bonds, causing cellular or genetic damage. The germs are either killed or sterilized, leaving them unable to reproduce. In either case, live bacterial counts can be significantly reduced and kept under control.

Water Issues

Along with excessive heat, water is a detriment to computer equipment. A data center may have a gas suppression fire system, but what about the floors above? Are they on a standard water sprinkler system, and what would happen if those sprinklers are activated or begin leaking? Proper planning moves equipment away from water pipes that might burst, basements that might flood, and roofs that might leak. There are also water leaks that are more difficult to recognize and detect. Blocked ventilation systems can cause condensation if warm, moist air is not removed quickly. If vents are located above or behind machines, condensation can form small puddles that no one sees. Stand-alone air conditioners are especially vulnerable to water leaks if condensation is not properly removed. Even small amounts of water near air intakes will raise humidity levels and draw moisture into servers.

Fire Detection and Suppression

To protect the server room from fire, the organization needs smoke detectors installed and linked to a panel with annunciators that warn people that there is smoke in the room. The detectors should also be linked to a fire suppression system that can help put out the fire without the suppression agent itself damaging equipment.

Fire Detection

A smoke detector, coupled with a good signaling device, is one of the most important devices to have because of its ability to warn of an impending fire.

A detector in proper working condition will sound an alarm and give all occupants a chance to make it out alive. There are two main categories of smoke detectors: optical detection (photoelectric) and physical process (ionization). Photoelectric detectors are classified as either beam or refraction types. Beam detectors pair a light source with a receiver; once enough smoke enters the room and obscures the beam of light, the alarm sounds. The refraction type places a blocker between the light source and the receiver; when enough smoke enters, light is scattered around the blocker onto the receiver, and the alarm sounds. Finally, ionization detectors constantly monitor the air around their sensors; once enough smoke enters the sensing chamber, the alarm sounds.

There are three main types of fire detectors: flame detectors, smoke detectors, and heat detectors. Flame detectors come in two main types, infrared (IR) and ultraviolet (UV). IR detectors primarily detect a large mass of hot gases that emit a specific spectral pattern near the detector; these patterns are sensed with a thermographic camera and an alarm is sounded. Other hot surfaces in the room may trigger a false alarm with this type. UV flame detectors react within 3 to 4 milliseconds because of the high-energy radiation emitted by fires and explosions at the instant of ignition. False alarm sources for UV detectors include stray UV from lightning and solar radiation reaching the protected area.

Heat detectors are either fixed-temperature or rate-of-rise devices. For fixed-temperature detectors, a predetermined temperature level is set; if the room temperature rises to that setting, the alarm sounds. Rate-of-rise detectors respond to a sudden change of temperature around the sensor, usually around 10 to 15 degrees per minute. Nothing more is required of the user except routine checks of battery life and operational status. Heat detectors should not be used to replace smoke detectors; each component of fire safety serves its own purpose and should be taken seriously. The combination of devices and knowledge of procedures is the only way to succeed during a possible fire.
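
As a rough illustration of the difference between the two heat detector types, the sketch below compares a fixed-temperature check against a rate-of-rise check. The setpoint and rise threshold are example values only, not values from any particular product.

```python
# Illustrative heat detector logic: fixed-temperature vs. rate-of-rise.
# Thresholds below are examples, not manufacturer settings.

def fixed_temperature_alarm(temp_f: float, setpoint_f: float = 135.0) -> bool:
    """Alarm when the room reaches the preset temperature."""
    return temp_f >= setpoint_f

def rate_of_rise_alarm(prev_temp_f: float, curr_temp_f: float,
                       interval_min: float = 1.0,
                       threshold_f_per_min: float = 12.0) -> bool:
    """Alarm when temperature climbs faster than the configured rate."""
    rise_per_min = (curr_temp_f - prev_temp_f) / interval_min
    return rise_per_min >= threshold_f_per_min

print(fixed_temperature_alarm(120))   # False: room is still below the setpoint
print(rate_of_rise_alarm(75, 95))     # True: a 20-degree rise in one minute
```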

Fire Suppression

All buildings should be equipped with an effective fire suppression system that provides around-the-clock protection. Traditionally, fire suppression systems employed arrays of water sprinklers that would douse a fire and the surrounding area. Sprinkler systems are classified into four groups: wet, dry, pre-action, and deluge.

  • Wet Systems—Have a constant supply of water in the pipes at all times; once activated, the sprinklers will not shut off until the water source is shut off.
  • Dry Systems—Do not hold water in the pipes; water is held back by a valve and released only when a sprinkler head is activated by excess heat.
  • Pre-Action Systems—Incorporate a detection system, which can eliminate concerns about water damage from false activations; water is held back until detectors in the area are activated.
  • Deluge Systems—Operate in the same manner as pre-action systems, except that all sprinkler heads are in the open position.

Water may be a sound solution for large physical areas such as warehouses, but it is entirely inappropriate for computer equipment. A water spray can irreparably damage hardware more quickly than encroaching smoke or heat. Gas suppression systems extinguish fire by starving it of oxygen or by chemically interrupting combustion. In the past, Halon was the choice for gas suppression systems; however, Halon depletes the ozone layer, and at suppression concentrations it and its decomposition by-products can injure nearby personnel.27

There are several gas suppression systems that are recommended for fire suppression in a server room or anywhere electronic equipment is employed:

  • Aero-K—Uses an aerosol of microscopic potassium compounds in a carrier gas released from small canisters mounted on walls near the ceiling. The Aero-K generators are not pressurized until fire is detected. The Aero-K system uses multiple fire detectors and will not release until a fire is “confirmed” by two or more detectors (limiting accidental discharge). The gas is non-corrosive, so it does not damage metals or other materials. It does not harm electronic devices or media such as tape or discs. More important, Aero-K is nontoxic and does not injure personnel.
  • FM-200—Is a colorless, liquefied compressed gas. It is stored as a liquid and dispensed into the hazard as a colorless, electrically non-conductive vapor that is clear and does not obscure vision. It leaves no residue and has acceptable toxicity for use in occupied spaces at design concentration. FM-200 does not displace oxygen and, therefore, is safe for use in occupied spaces without fear of oxygen deprivation.

Summary

The SSCP must spend time becoming comfortable with the intricacies of the organization’s security operations. There are many aspects to master, from confidentiality, integrity, and availability to non-repudiation, separation of duties, and defense-in-depth. In addition, the SSCP is expected to have a firm understanding of the (ISC)2 Code of Ethics and, as a result, of the actions required to remain compliant. The ability to document and operate security controls effectively within the organization is another key responsibility the SSCP is expected to undertake. An information security practitioner must also have a thorough understanding of fundamental risk, response, and recovery concepts. By understanding each of these concepts, security practitioners will have the knowledge required to protect their organizations and professionally execute their job responsibilities.

Sample Questions

  1. Security awareness training aims to educate users on:
    1. What they can do to maintain the organization’s security posture
    2. How to secure their home computer systems
    3. The work performed by the information security organization
    4. How attackers defeat security safeguards
  2. Which of the following are operational aspects of configuration management (CM)?
    1. Identification, documentation, control, and auditing
    2. Documentation, control, accounting, and auditing
    3. Control, accounting, auditing, and reporting
    4. Identification, control, accounting, and auditing
  3. The systems certification process can best be described as a:
    1. Process for obtaining stakeholder signoff on system configuration
    2. Method of validating adherence to security requirements
    3. Means of documenting adherence to security standards
    4. Method of testing a system to assure that vulnerabilities have been addressed
  4. What is a degausser used to do?
    1. Render media that contain sensitive data unusable.
    2. Overwrite sensitive data with zeros so that it is unreadable.
    3. Eliminate magnetic data remanence on a disk or tape.
    4. Reformat a disk or tape for subsequent reuse.
  5. A web application software vulnerability that allows an attacker to extract sensitive information from a backend database is known as a:
    1. Cross-site scripting vulnerability
    2. Malicious file execution vulnerability
    3. Injection flaw
    4. Input validation failure
  6. The security practice that restricts user access based on need to know is called:
    1. Mandatory access control
    2. Default deny configuration
    3. Role-based access control
    4. Least privilege
  7. A security guideline is a:
    1. Set of criteria that must be met to address security requirements
    2. Tool for measuring the effectiveness of security safeguards
    3. Statement of senior management expectations for managing the security program
    4. Recommended security practice
  8. A security baseline is a:
    1. Measurement of security effectiveness when a control is first implemented
    2. Recommended security practice
    3. Minimum set of security requirements for a system
    4. Measurement used to determine trends in security activity
  9. An antifraud measure that requires two people to complete a transaction is an example of the principle of:
    1. Separation of duties
    2. Dual control
    3. Role-based access control
    4. Defense in depth
  10. The waterfall model is a:
    1. Development method that follows a linear sequence of steps
    2. Iterative process used to develop secure applications
    3. Development method that uses rapid prototyping
    4. Extreme programming model used to develop web applications
  11. Code signing is a technique used to:
    1. Ensure that software is appropriately licensed for use
    2. Prevent source code tampering
    3. Identify source code modules in a release package
    4. Support verification of source code authenticity
  12. The role of information owner in the system security plan includes:
    1. Maintaining the system security plan
    2. Determining privileges that will be assigned to users of the system
    3. Assessing the effectiveness of security controls
    4. Authorizing the system for operation
  13. What are the mandatory tenets of the (ISC)2 Code of Ethics? (Choose all that apply.)
    1. Protect society, the commonwealth, and the infrastructure.
    2. Act honorably, honestly, justly, responsibly, and legally.
    3. Promote and preserve public trust and confidence in information and systems.
    4. Advance and protect the profession.
  14. What principle does confidentiality support?
    1. Due diligence
    2. Due care
    3. Least privilege
    4. Collusion
  15. What two things are used to accomplish non-repudiation?
    1. Proofing and provisioning
    2. Encryption and authorization
    3. Monitoring and private keys
    4. Digital signatures and public key infrastructure
  16. What are the elements that make up information security risks?
    1. Requirements, threats and exposures
    2. Threats, vulnerabilities and impacts
    3. Assessments, vulnerabilities and expenses
    4. Impacts, probabilities and known errors
  17. What is an example of a compensating control?
    1. A fence
    2. Termination
    3. Job rotation
    4. Warning banner
  18. What is remote attestation?
    1. A form of integrity protection that makes use of a hashed copy of hardware and software configuration to verify that configurations have not been altered.
    2. A form of confidentiality protection that makes use of a cached copy of hardware and software configuration to verify that configurations have not been altered.
    3. A form of integrity protection that makes use of a cached copy of hardware and software configuration to verify that configurations have not been altered.
    4. A form of confidentiality protection that makes use of a hashed copy of hardware and software configuration to verify that configurations have not been altered.
  19. With regard to the Change Control Policy document and Change Management, where should Analysis/Impact Assessment take place?
    1. After the decision making and prioritization activities, but before approval.
    2. After the recording of the proposed change(s), but before decision making and prioritization activities.
    3. After the approval, but before status tracking activities.
    4. After the request submission, but before recording of the proposed change(s).

Notes
