Chapter 8

Communications and Operations Security

Chapter Objectives

After reading this chapter and completing the exercises, you will be able to do the following:

  • Create useful and appropriate standard operating procedures.

  • Implement change control processes.

  • Understand the importance of patch management.

  • Protect information systems against malware.

  • Consider data backup and replication strategies.

  • Recognize the security requirements of email and email systems.

  • Appreciate the value of log data and analysis.

  • Evaluate service provider relationships.

  • Understand the importance of threat intelligence and information sharing.

  • Write policies and procedures to support operational and communications security.

Section 3.3 of the NIST Cybersecurity Framework, “Communicating Cybersecurity Requirements with Stakeholders,” provides guidance on communicating requirements among the interdependent stakeholders responsible for the delivery of essential critical infrastructure services.

Section 12 of ISO 27002:2013, “Operations Security,” and Section 13 of ISO 27002:2013, “Communications Security,” focus on information technology (IT) and security functions, including standard operating procedures, change management, malware protection, data replication, secure messaging, and activity monitoring. These functions are primarily carried out by IT and information security data custodians, such as network administrators and security engineers. Many companies outsource some aspect of their operations. Section 15 of ISO 27002:2013, “Supplier Relationships,” focuses on service delivery and third-party security requirements.

The NICE Framework introduced in Chapter 6, “Human Resources Security,” is particularly appropriate for this domain. Data owners need to be educated on operational risk so they can make informed decisions. Data custodians should participate in training that focuses on operational security threats so that they understand the reason for implementing safeguards. Users should be surrounded by a security awareness program that fosters everyday best practices. Throughout the chapter, we cover policies, processes, and procedures recommended to create and maintain a secure operational environment.

FYI: NIST Cybersecurity Framework and ISO/IEC 27002:2013

As previously mentioned in this chapter, Section 3.3 of the NIST Cybersecurity Framework, “Communicating Cybersecurity Requirements with Stakeholders,” provides guidance on communicating requirements among interdependent stakeholders. For example, an organization could use a Target Profile to express cybersecurity risk management requirements to an external service provider. The external service provider could be a cloud provider, such as Amazon Web Services (AWS), Google Cloud, or Microsoft Azure, or a cloud-based service, such as Box, Dropbox, or any other service.

In addition, the NIST Framework suggests that an organization may express its cybersecurity state through a Current Profile to report results or to compare with acquisition requirements. Also, a critical infrastructure owner or operator may use a Target Profile to convey required Categories and Subcategories.

A critical infrastructure sector may establish a Target Profile that can be used among its constituents as an initial baseline Profile to build their tailored Target Profiles.

Section 12 of ISO 27002:2013, “Operations Security,” focuses on data center operations, integrity of operations, vulnerability management, protection against data loss, and evidence-based logging. Section 13 of ISO 27002:2013, “Communications Security,” focuses on protection of information in transit. Section 15 of ISO 27002:2013, “Supplier Relationships,” focuses on service delivery and third-party security requirements.

Additional NIST guidance is provided in the following documents:

  • “NIST Cybersecurity Framework” (covered in detail in Chapter 16)

  • SP 800-14: “Generally Accepted Principles and Practices for Securing Information Technology Systems”

  • SP 800-53: “Recommended Security Controls for Federal Information Systems and Organizations”

  • SP 800-100: “Information Security Handbook: A Guide for Managers”

  • SP 800-40: “Creating a Patch and Vulnerability Management Program”

  • SP 800-83: “Guide to Malware Incident Prevention and Handling for Desktops and Laptops”

  • SP 800-45: “Guidelines on Electronic Mail Security”

  • SP 800-92: “Guide to Computer Security Log Management”

  • SP 800-42: “Guideline on Network Security Testing”

Standard Operating Procedures

Standard operating procedures (SOPs) are detailed explanations of how to perform a task. The objective of an SOP is to provide standardized direction, improve communication, reduce training time, and improve work consistency. An alternate name for SOPs is standard operating protocols. An effective SOP communicates who will perform the task, what materials are necessary, where the task will take place, when the task will be performed, and how the person will execute the task.

Why Document SOPs?

The very process of creating SOPs requires us to evaluate what is being done, why it is being done that way, and perhaps how we could do it better. SOPs should be written by individuals knowledgeable about the activity and the organization’s internal structure. Once written, the details in an SOP standardize the target process and provide sufficient information that someone with limited experience or knowledge of the procedure, but with a basic understanding, can successfully perform the procedure unsupervised. Well-written SOPs reduce organizational dependence on individual and institutional knowledge.

It is not uncommon for an employee to become so important that losing that individual would be a huge blow to the company. Imagine that this person is the only one performing a critical task; no one has been cross-trained, and no documentation exists as to how the employee performs this task. The employee suddenly becoming unavailable could seriously injure the organization. Having proper documentation of operating procedures is not a luxury: It is a business requirement.

SOPs should be authorized and protected accordingly, as illustrated in Figure 8-1 and described in the following sections.

A figure represents the process of authorizing and protecting SOPs.

FIGURE 8-1 Authorizing and Protecting SOPs

Authorizing SOP Documentation

After a procedure has been documented, it should be reviewed, verified, and authorized before being published. The reviewer analyzes the document for clarity and readability. The verifier tests the procedure to make sure it is correct and not missing any steps. The process owner is responsible for authorization, publication, and distribution. Post-publication changes to the procedures must be authorized by the process owner.
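The review–verify–authorize sequence described above can be modeled as a simple state machine. The following is an illustrative sketch only; the state names and transition rules are assumptions for demonstration, not part of any standard.

```python
# Illustrative model of the SOP authorization workflow described above.
# State names and transitions are assumptions, not from a standard.
ALLOWED_TRANSITIONS = {
    "draft": {"reviewed"},       # reviewer checks clarity and readability
    "reviewed": {"verified"},    # verifier tests the procedure for correctness
    "verified": {"published"},   # process owner authorizes and publishes
    "published": {"draft"},      # post-publication changes restart the cycle
}

def advance(state: str, next_state: str) -> str:
    """Move an SOP document to the next workflow state, or raise if out of order."""
    if next_state not in ALLOWED_TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot move from {state!r} to {next_state!r}")
    return next_state
```

A transition such as `advance("draft", "published")` raises an error, reflecting the rule that no SOP is published without review and verification.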

Protecting SOP Documentation

Access and version controls should be put in place to protect the integrity of the document from both unintentional error and malicious insiders. Imagine a case where a disgruntled employee gets hold of a business-critical procedure document and changes key information. If the tampering is not discovered, it could lead to a disastrous situation for the company. The same holds true for revisions. If multiple revisions of the same procedure exist, there is a good chance someone is going to be using the wrong version.

Developing SOPs

SOPs should be understandable to everyone who uses them. SOPs should be written in a concise, step-by-step, plain language format. If not well written, SOPs are of limited value. It is best to use short, direct sentences so that the reader can quickly understand and memorize the steps in the procedure. Information should be conveyed clearly and explicitly to remove any doubt as to what is required. The steps must be in logical order. Any exceptions must be noted and explained. Warnings must stand out.

The four common SOP formats are Simple Step, Hierarchical, Flowchart, and Graphic. As shown in Table 8-1, two factors determine what type of SOP to use: how many decisions the user will need to make and how many steps are in the procedure. Routine procedures that are short and require few decisions can be written using the simple step format. Long procedures consisting of more than 10 steps, with few decisions, should be written in a hierarchical format or in a graphic format. Procedures that require many decisions should be written in the form of a flowchart. It is important to choose the correct format. The best-written SOPs will fail if they cannot be followed.

TABLE 8-1 SOP Methods

Many Decisions?    More Than Ten Steps?    Recommended SOP Format

No                 No                      Simple Step

No                 Yes                     Hierarchical or Graphic

Yes                No                      Flowchart

Yes                Yes                     Flowchart
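The decision logic in Table 8-1 can be expressed as a small helper function. This is an illustrative sketch; the function name is hypothetical.

```python
def recommend_sop_format(many_decisions: bool, more_than_ten_steps: bool) -> str:
    """Return the SOP format recommended by Table 8-1."""
    if many_decisions:
        # Procedures requiring many decisions are best shown as a flowchart,
        # regardless of length.
        return "Flowchart"
    if more_than_ten_steps:
        # Long procedures with few decisions suit hierarchical or graphic formats.
        return "Hierarchical or Graphic"
    # Short, routine procedures with few decisions use the simple step format.
    return "Simple Step"
```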

As illustrated in Table 8-2, the simple step format uses sequential steps. Generally, these rote procedures do not require any decision making and do not have any substeps. The simple step format should be limited to 10 steps.

TABLE 8-2 Simple Step Format

Procedure                                                        Completed

Note: These procedures are to be completed by the night operator by 6:00 a.m., Monday–Friday. Please initial each completed step.

  1. Remove backup tape from tape drive.

  2. Label with the date.

  3. Place tape in tape case and lock.

  4. Call ABC delivery at 888-555-1212.

  5. Tell ABC that the delivery is ready to be picked up.

  6. When ABC arrives, require driver to present identification.

  7. Note in pickup log the driver’s name.

  8. Have the driver sign and date the log.

As illustrated in the New User Account Creation Procedure example, shown in Table 8-3, the hierarchical format is used for tasks that require more detail or exactness. The hierarchical format allows the use of easy-to-read steps for experienced users while including substeps that are more detailed as well. Experienced users may refer to the substeps only when they need to, whereas beginners will use the detailed substeps to help them learn the procedure.

TABLE 8-3 Hierarchical Format

New User Account Creation Procedure

Note: You must have the HR New User Authorization Form before starting this process.

Procedures                                                       Detail

Launch Active Directory Users and Computers (ADUC).

  1. Click on the TS icon located on the administrative desktop.

  2. Provide your login credentials.

  3. Click the ADUC icon.

Create a new user.

  1. Right-click the Users OU folder.

  2. Choose New User.

Enter the required user information.

  1. Enter user first, last, and full name.

  2. Enter user login name and click Next.

  3. Enter user’s temporary password.

  4. Choose User Must Change Password at Next Login and click Next.

Create an Exchange mailbox.

  1. Make sure Create an Exchange Mailbox is checked.

  2. Accept the defaults and click Next.

Verify account information.

  1. Confirm that all information on the summary screen is correct.

  2. Choose Finish.

Complete demographic profile.

  1. Double-click the username.

  2. Complete the information on the General, Address, Telephone, and Organization tabs. (Note: Info should be on the HR request sheet.)

Add users to groups.

  1. Choose the Member Of tab.

  2. Add groups as listed on the HR request sheet.

  3. Click OK when completed.

Set remote control permissions.

  1. Click the Remote Control tab.

  2. Make sure the Enable Remote Control and Require User’s Permission boxes are checked.

  3. Level of control should be set to Interact with the Session.

Advise HR regarding account creation.

  1. Sign and date the HR request form.

  2. Send it to HR via interoffice mail.

Pictures truly are worth a thousand words. The graphic format, shown in Figure 8-2, can use photographs, icons, illustrations, or screenshots to illustrate the procedure. This format is often used for configuration tasks, especially if various literacy levels or language barriers are involved.

A diagrammatic representation of the steps followed in the decision-making process.

FIGURE 8-2 Example of the Graphic Format

A flowchart, shown in Figure 8-3, is a diagrammatic representation of steps in a decision-making process. A flowchart provides an easy-to-follow mechanism for walking a worker through a series of logical decisions and the steps that should be taken as a result. When developing flowcharts, you should use the generally accepted flowchart symbols. ISO 5807:1985 defines symbols to be used in flowcharts and gives guidance for their use.

A flow diagram depicts the installation procedure of ABC software.

FIGURE 8-3 Flowchart Format

FYI: A Recommended Writing Resource

Several resources teach how to write procedures; even those not specific to cybersecurity can be very helpful for getting started. One example is North Carolina State University’s Produce Safety SOP template at https://ncfreshproducesafety.ces.ncsu.edu/wp-content/uploads/2014/03/how-to-write-an-SOP.pdf.

Another example is the Cornell University “Developing Effective Standard Operating Procedures” by David Grusenmeyer.

In Practice

Standard Operating Procedures Documentation Policy

Synopsis: Standard operating procedures (SOPs) are required to ensure the consistent and secure operation of information systems.

Policy Statement:

  • SOPs for all critical information processing activities will be documented.

  • Information system custodians are responsible for developing and testing the procedures.

  • Information system owners are responsible for authorization and ongoing review.

  • The Office of Information Technology is responsible for the publication and distribution of information systems-related SOPs.

  • SOPs for all critical information security activities will be documented, tested, and maintained.

  • Information security custodians are responsible for developing and testing the procedures.

  • The Office of Information Security is responsible for authorization, publication, distribution, and review of information security–related SOPs.

  • Internal auditors will inspect actual practice against the requirements of the SOPs. Each auditor or audit team creates checklists of the items to be covered. Corrective actions and suggestions for remediation may be raised following an internal or regulatory audit where discrepancies have been observed.

Operational Change Control

Operational change is inevitable. Change control is an internal procedure by which authorized changes are made to software, hardware, network access privileges, or business processes. The information security objective of change control is to ensure the stability of the network while maintaining the required levels of confidentiality, integrity, and availability (CIA). A change management process establishes an orderly and effective mechanism for submission, evaluation, approval, prioritization, scheduling, communication, implementation, monitoring, and organizational acceptance of change.

Why Manage Change?

The process of making changes to systems in production environments presents risks to ongoing operations and data that are effectively mitigated by consistent and careful management. Consider this scenario: Windows 8 is installed on a mission-critical workstation. The system administrator installs a service pack. A service pack often will make changes to system files. Now imagine that for a reason beyond the installer’s control, the process fails halfway through. What is the result? An operating system that is neither the original version, nor the updated version. In other words, there could be a mix of new and old system files, which would result in an unstable platform. The negative impact on the process that depends on the workstation would be significant. Take this example to the next level and imagine the impact if this machine were a network server used by all employees all day long. Consider the impact on the productivity of the entire company if this machine were to become unstable because of a failed update. What if the failed change impacted a customer-facing device? The entire business could come grinding to a halt. What if the failed change also introduced a new vulnerability? The result could be loss of confidentiality, integrity, and/or availability (CIA).

Change needs to be controlled. Organizations that take the time to assess and plan for change spend considerably less time in crisis mode. Typical change requests are a result of software or hardware defects or bugs that must be fixed, system enhancement requests, and changes in the underlying architecture such as a new operating system, virtualization hypervisor, or cloud provider.

The change control process starts with an RFC (Request for Change). The RFC is submitted to a decision-making body (generally senior management). The change is then evaluated and, if approved, implemented. Each step must be documented. Not every change should be subject to this process; routing all changes through it would negate the desired effect and, in the end, significantly impact operations. An organizational policy should clearly delineate the types of change to which the change control process applies. Additionally, there must be a mechanism to implement “emergency” changes. Figure 8-4 illustrates the RFC process and divides it into three primary milestones or phases: evaluate, approve, and verify.

An RFC process diagram is depicted.

FIGURE 8-4 The RFC Process

Submitting an RFC

The first phase of the change control process is an RFC submission. The request should include the following items:

  • Requestor name and contact information

  • Description of the proposed change

  • Justification of why the proposed changes should be implemented

  • Impact of not implementing the proposed change

  • Alternatives to implementing the proposed change

  • Cost

  • Resource requirements

  • Time frame

Figure 8-5 shows an RFC template that captures these items.

A figure represents the RFC template.

FIGURE 8-5 RFC Template

Taking into consideration the preceding information as well as organizational resources, budget, and priorities, the decision makers can choose to continue to evaluate, approve, reject, or defer until a later date.
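An RFC record containing the submission fields listed above, together with the four possible decision outcomes, can be sketched as follows. The class and field names are hypothetical, chosen only to mirror the list above.

```python
from dataclasses import dataclass

# The four outcomes available to the decision makers.
DECISIONS = {"evaluate", "approve", "reject", "defer"}

@dataclass
class RequestForChange:
    """Illustrative RFC record; field names mirror the submission items above."""
    requestor: str
    contact: str
    description: str
    justification: str
    impact_if_not_implemented: str
    alternatives: str
    cost: float
    resources: str
    time_frame: str
    decision: str = "evaluate"   # default: still under evaluation

    def decide(self, decision: str) -> None:
        """Record the decision makers' outcome for this RFC."""
        if decision not in DECISIONS:
            raise ValueError(f"unknown decision: {decision!r}")
        self.decision = decision
```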

Developing a Change Control Plan

After a change is approved, the next step is for the requestor to develop a change control plan. The complexity of the change as well as the risk to the organization will influence the level of detail required. Standard components of a change control plan include a security review to ensure that new vulnerabilities are not being introduced.

Communicating Change

The need to communicate to all relevant parties that a change will be taking place cannot be overemphasized. Research studies have found that the reason for the change is the single most important message to share with employees, and the second most important message for managers and executives (whose number-one message concerns their role and expectations). The messages to communicate to impacted employees fall into two categories: messages about the change itself and messages about how the change impacts them.

Messages about the change include the following:

  • The current situation and the rationale for the change

  • A vision of the organization after the change takes place

  • The basics of what is changing, how it will change, and when it will change

  • The expectation that change will happen and is not a choice

  • Status updates on the implementation of the change, including success stories

Messages about how the change will affect the employee include the following:

  • The impact of the change on the day-to-day activities of the employee

  • Implications of the change on job security

  • Specific behaviors and activities expected from the employee, including support of the change

  • Procedures for getting help and assistance during the change

Projects that fail to communicate are doomed to fail.

Implementing and Monitoring Change

After the change is approved, planned, and communicated, it is time to implement. Change can be unpredictable. If possible, the change should first be applied to a test environment and monitored for impact. Even minor changes can cause havoc. For example, a simple change in a shared database’s filename could cause all applications that use it to fail. For most environments, the primary implementation objective is to minimize stakeholder impact. This includes having a plan to roll back or recover from a failed implementation.

Throughout the implementation process, all actions should be documented. This includes actions taken before, during, and after the changes have been applied. Changes should not be “set and forget.” Even a change that appears to have been flawlessly implemented should be monitored for unexpected impact.

Some emergency situations require organizations to bypass certain change controls to recover from an outage, incident, or unplanned event. Especially in these cases, it is important to document the change thoroughly, communicate the change as soon as possible, and have it approved post implementation.

In Practice

Operational Change Control Policy

Synopsis: Changes to information systems and processes must be managed to minimize harm and maximize the likelihood of success.

Policy Statement:

  • The Office of Information Technology is responsible for maintaining a documented change control process that provides an orderly method in which changes to the information systems and processes are requested and approved prior to their installation and/or implementation. Changes to information systems include but are not limited to:

    • Vendor-released operating system, software application, and firmware patches, updates, and upgrades

    • Updates and changes to internally developed software applications

    • Hardware component repair/replacement

  • Implementations of security patches are exempt from this process as long as they follow the approved patch management process.

  • The change control process must take into consideration the criticality of the system and the risk associated with the change.

  • Changes to information systems and processes considered critical to the operation of the company must be subject to preproduction testing.

  • Changes to information systems and processes considered critical to the operation of the company must have an approved rollback and/or recovery plan.

  • Changes to information systems and processes considered critical to the operation of the company must be approved by the Change Management Committee. Other changes may be approved by the Director of Information Systems, Chief Technology Officer (CTO), or Chief Information Officer (CIO).

  • Changes must be communicated to all impacted stakeholders.

  • In an emergency scenario, changes may be made immediately (business system interruption, failed server, and so on) to the production environment. These changes will be verbally approved by a manager supervising the affected area at the time of change. After the changes are implemented, the change must be documented in writing and submitted to the CTO.

Why Is Patching Handled Differently?

A patch is software or code designed to fix a problem. Applying security patches is the primary method of fixing security vulnerabilities in software. The vulnerabilities are often identified by researchers or ethical hackers who then notify the software company so that they can develop and distribute a patch. A function of change management, patching is distinct in how often and how quickly patches need to be applied. The moment a patch is released, attackers make a concerted effort to reverse engineer the patch swiftly (measured in days or even hours), identify the vulnerability, and develop and release exploit code. The time immediately after the release of a patch is ironically a particularly vulnerable moment for most organizations because of the time lag in obtaining, testing, and deploying a patch.

FYI: Patch Tuesday and Exploit Wednesday

Microsoft releases new security updates and their accompanying bulletins on the second Tuesday of every month at approximately 10 a.m. Pacific Time, hence the name Patch Tuesday. The following day is referred to as Exploit Wednesday, signifying the start of exploits appearing in the wild. Many security researchers and threat actors reverse engineer the fixes (patches) to create exploits, in some cases within hours after disclosure.

Cisco also releases bundles of Cisco IOS and IOS XE Software Security Advisories at 1600 GMT on the fourth Wednesday in March and September each year. Additional information can be found on Cisco’s Security Vulnerability Policy at: https://www.cisco.com/c/en/us/about/security-center/security-vulnerability-policy.html.

Understanding Patch Management

Timely patching of security issues is generally recognized as critical to maintaining the operational CIA of information systems. Patch management is the process of scheduling, testing, approving, and applying security patches. Vendors who maintain information systems within a company network should be required to adhere to the organizational patch management process.

The patching process can be unpredictable and disruptive. Users should be notified of potential downtime due to patch installation. Whenever possible, patches should be tested prior to enterprise deployment. However, there may be situations where it is prudent to waive testing based on the severity and applicability of the identified vulnerability. If a critical patch cannot be applied in a timely manner, senior management should be notified of the risk to the organization.
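The test-or-waive decision described above can be sketched as a small function. This is a simplified illustration of the process, not a NIST-specified algorithm; the function and step names are hypothetical.

```python
def patch_workflow(applicable: bool, testing_waived: bool = False) -> list:
    """Return ordered patch-handling steps: schedule, test (unless waived),
    approve, apply, monitor. Non-applicable patches are documented only."""
    if not applicable:
        # Record why the patch does not apply; no deployment needed.
        return ["document non-applicability"]
    steps = ["schedule"]
    if not testing_waived:
        # Default path: test before enterprise deployment.
        steps.append("test")
    # Testing may be waived (e.g., by the CIO/CTO) for severe,
    # time-critical vulnerabilities.
    steps.extend(["approve", "apply", "monitor"])
    return steps
```

For example, `patch_workflow(True, testing_waived=True)` skips the test step, reflecting the case where severity justifies immediate deployment.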

Today’s cybersecurity environment and patching dependencies call for substantial improvements in the area of vulnerability coordination. Open source software vulnerabilities like Heartbleed, protocol vulnerabilities like the WPA KRACK attacks, and others highlight coordination challenges among software and hardware providers.

The Industry Consortium for Advancement of Security on the Internet (ICASI) proposed to the FIRST Board of Directors that a Special Interest Group (SIG) be considered on vulnerability disclosure to review and update vulnerability coordination guidelines. Later, the National Telecommunications and Information Association (NTIA) convened a multistakeholder process to investigate cybersecurity vulnerabilities. The NTIA multiparty effort joined the similar effort underway within the FIRST Vulnerability Coordination SIG. Stakeholders created a document that derives multiparty disclosure guidelines and practices from common coordination scenarios and variations. This document can be found at https://first.org/global/sigs/vulnerability-coordination/multiparty/guidelines-v1.0.

Figure 8-6 shows the FIRST Vulnerability Coordination stakeholder roles and communication paths.

A figure shows the roles and the communication path of first vulnerability coordination stakeholders.

FIGURE 8-6 FIRST Vulnerability Coordination Stakeholder Roles and Communication Paths

The definitions of the different stakeholders used in the FIRST “Guidelines and Practices for Multi-Party Vulnerability Coordination and Disclosure” document are based on the definitions available in ISO/IEC 29147:2014 and used with minimal modification.

NIST Special Publication 800-40 Revision 3, Guide to Enterprise Patch Management Technologies, published July 2013, is designed to assist organizations in understanding the basics of enterprise patch management technologies. It explains the importance of patch management and examines the challenges inherent in performing patch management. The publication also provides an overview of enterprise patch management technologies and discusses metrics for measuring the technologies’ effectiveness and for comparing the relative importance of patches.

In Practice

Security Patch Management Policy

Synopsis: The timely deployment of security patches will reduce or eliminate the potential for exploitation.

Policy Statement:

  • Implementations of security patches are exempt from the organizational change management process as long as they follow the approved patch management process.

  • The Office of Information Security is responsible for maintaining a documented patch management process.

  • The Office of Information Technology is responsible for the deployment of all operating system, application, and device security patches.

  • Security patches will be reviewed and deployed according to applicability of the security vulnerability and/or identified risk associated with the patch or hotfix.

  • Security patches will be tested prior to deployment in the production environment. The CIO and the CTO have authority to waive testing based on the severity and applicability of the identified vulnerability.

  • Vendors who maintain company systems are required to adhere to the company patch management process.

  • If a security patch cannot be successfully applied, the COO must be notified. Notification must detail the risk to the organization.

Malware Protection

Malware, short for “malicious software,” is software (or script or code) designed to disrupt computer operation, gather sensitive information, or gain unauthorized access to computer systems and mobile devices. Malware is operating-system agnostic. Malware can infect systems by being bundled with other programs or self-replicating; however, the vast majority of malware requires user interaction, such as clicking an email attachment or downloading a file from the Internet. It is critical that security awareness programs articulate individual responsibility in fighting malware.

Malware has become the tool of choice for cybercriminals, hackers, and hacktivists. It has become easy for attackers to create their own malware by acquiring malware toolkits, such as Zeus, Shadow Brokers leaked exploits, and many more, and then customizing the malware produced by those toolkits to meet their individual needs. Examples are ransomware such as WannaCry, Nyetya, Bad Rabbit, and many others. Many of these toolkits are available for purchase, whereas others are open source, and most have user-friendly interfaces that make it simple for unskilled attackers to create customized, high-capability malware. Unlike most malware several years ago, which tended to be easy to notice, much of today’s malware is specifically designed to quietly and slowly spread to other hosts, gathering information over extended periods of time and eventually leading to exfiltration of sensitive data and other negative impacts. The term advanced persistent threats (APTs) is generally used to refer to this approach.

NIST Special Publication 800-83, Revision 1, Guide to Malware Incident Prevention and Handling for Desktops and Laptops, published in July 2012, provides recommendations for improving an organization’s malware incident prevention measures. It also gives extensive recommendations for enhancing an organization’s existing incident response capability so that it is better prepared to handle malware incidents, particularly widespread ones.

Are There Different Types of Malware?

Malware categorization is based on infection and propagation characteristics. The categories of malware include viruses, worms, Trojans, bots, ransomware, rootkits, and spyware/adware. Hybrid malware is code that combines characteristics of multiple categories—for example, combining a virus’s ability to alter program code with a worm’s ability to reside in live memory and to propagate without any action on the part of the user.

A virus is malicious code that attaches to and becomes part of another program. Generally, viruses are destructive. Almost all viruses attach themselves to executable files. They then execute in tandem with the host file. Viruses spread when the software or document they are attached to is transferred from one computer to another using the network, a disk, file sharing, or infected email attachments.

A worm is a piece of malicious code that can spread from one computer to another without requiring a host file to infect. Worms are specifically designed to exploit known vulnerabilities, and they spread by taking advantage of network and Internet connections. An early example of a worm was W32/SQL Slammer (aka Slammer and Sapphire), which was one of the fastest-spreading worms in history. It infected the process space of Microsoft SQL Server 2000 and Microsoft SQL Desktop Engine (MSDE) by exploiting an unpatched buffer overflow. Once running, the worm tried to send itself to as many other Internet-accessible SQL hosts as possible. Microsoft had released a patch six months prior to the Slammer outbreak. Another example of “wormable” malware is the WannaCry ransomware, which is discussed later in this chapter.

A Trojan is malicious code that masquerades as a legitimate benign application. For example, when a user downloads a game, he may get more than he expected. The game may serve as a conduit for a malicious utility such as a keylogger or screen scraper. A keylogger is designed to capture and log keystrokes, mouse movements, Internet activity, and processes in memory such as print jobs. A screen scraper makes copies of what you see on your screen. A typical activity attributed to Trojans is to open connections to a command and control server (known as a C&C). Once the connection is made, the machine is said to be “owned.” The attacker takes control of the infected machine. In fact, cybercriminals will tell you that after they have successfully installed a Trojan on a target machine, they actually have more control over that machine than the very person seated in front of and interacting with it. Once “owned,” access to the infected device may be sold to other criminals. Trojans do not reproduce by infecting other files, nor do they self-replicate. Trojans must spread through user interaction, such as opening an email attachment or downloading and running a file from the Internet. Examples of Trojans include Zeus and SpyEye. Both Trojans are designed to capture financial website login credentials and other personal information.

Bots (also known as robots) are snippets of code designed to automate tasks and respond to instructions. Bots can self-replicate (like worms) or replicate via user action (like Trojans). A malicious bot is installed in a system without the user’s permission or knowledge. The bot connects back to a central server or command center. An entire network of compromised devices is known as a botnet. One of the most common uses of a botnet is to launch distributed denial of service (DDoS) attacks. An example of a botnet that caused major outages in the past is the Mirai botnet, which is often referred to as the IoT Botnet. Threat actors were able to successfully compromise IoT devices, including security cameras and consumer routing devices, to create one of the most devastating botnets in history, launching numerous DDoS attacks against very high-profile targets.

Ransomware is a type of malware that takes a computer or its data hostage in an effort to extort money from victims. There are two types of ransomware: Lockscreen ransomware displays a full-screen image or web page that prevents you from accessing anything in your computer. Encryption ransomware encrypts your files, preventing you from opening them. The most common ransomware scheme is a notification that authorities have detected illegal activity on your computer and you must pay a “fine” to avoid prosecution and regain access to your system. Examples of popular ransomware include WannaCry, Nyetya, Bad Rabbit, and others. Ransomware typically spreads or is delivered by malicious emails, malvertising (malicious advertisements or ads), and other drive-by downloads. WannaCry, however, was the first ransomware to spread in a similar way to worms (as previously defined in this chapter); specifically, it used the EternalBlue exploit.

EternalBlue is an SMB exploit affecting various Windows operating systems from XP to Windows 7 and various flavors of Windows Server 2003 and 2008. The exploit technique, known as heap spraying, is used to inject shellcode into vulnerable systems, allowing for their exploitation. The code is capable of targeting vulnerable machines by IP address and attempting exploitation via SMB port 445. The EternalBlue code is closely tied to the DoublePulsar backdoor and even checks for the existence of that malware during its installation routine.
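The behavior described above, sweeping IP addresses for a listening SMB service, can be inverted defensively: a quick reachability check tells an administrator which hosts expose port 445 at all. The following is a minimal Python sketch, not a vulnerability test; the host and timeout values are illustrative assumptions.

```python
import socket

def smb_port_open(host, port=445, timeout=1.0):
    """Return True if a TCP connection to the given port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False
```

A host answering on 445 is not necessarily vulnerable; it simply warrants a check that the relevant patches are applied.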

Cisco Talos has published numerous articles covering in-depth technical details of many types of ransomware at http://blog.talosintelligence.com/search/label/ransomware.

A rootkit is a set of software tools that hides its presence in the lower layers of the operating system, such as the operating system kernel, or in the device’s basic input/output system (BIOS), where it runs with privileged access permissions. Root is a UNIX/Linux term that denotes administrator-level or privileged access. The word “kit” denotes a program that allows someone to obtain root/admin-level access to the computer by executing the programs in the kit, all of which is done without end-user consent or knowledge. The intent is generally remote C&C. Rootkits cannot self-propagate or replicate; they must be installed on a device. Because of where they operate, they are very difficult to detect and even more difficult to remove.

Spyware is a general term used to describe software that without a user’s consent and/or knowledge tracks Internet activity, such as searches and web surfing, collects data on personal habits, and displays advertisements. Spyware sometimes affects the device configuration by changing the default browser, changing the browser home page, or installing “add-on” components. It is not unusual for an application or online service license agreement to contain a clause that allows for the installation of spyware.

A logic bomb is a type of malicious code that is injected into a legitimate application. An attacker can program a logic bomb to delete itself from the disk after it performs the malicious tasks on the system. Examples of these malicious tasks include deleting or corrupting files or databases and executing a specific instruction after certain system conditions are met.

A downloader is a piece of malware that downloads and installs other malicious content from the Internet to perform additional exploitation on an affected system.

A spammer is a piece of malware that sends spam, or unsolicited messages sent via email, instant messaging, newsgroups, or any other kind of computer or mobile device communications. Spammers send these unsolicited messages with the primary goal of fooling users into clicking malicious links, replying to emails or other messages with sensitive information, or performing different types of scams. The attacker’s main objective is to make money.

How Is Malware Controlled?

The IT department is generally tasked with the responsibility of employing a strong antimalware defense-in-depth strategy. In this case, defense-in-depth means implementing prevention, detection, and response controls, coupled with a security awareness campaign.

Using Prevention Controls

The goal of prevention control is to stop an attack before it even has a chance to start. This can be done in a number of ways:

  • Impact the distribution channel by training users not to click links embedded in email, open unexpected email attachments, irresponsibly surf the Web, download games or music, participate in peer-to-peer (P2P) networks, or allow remote access to their desktop.

  • Configure the firewall to restrict access.

  • Do not allow users to install software on company-provided devices.

  • Do not allow users to make changes to configuration settings.

  • Do not allow users to have administrative rights to their workstations. Malware runs in the security context of the logged-in user.

  • Do not allow users to disable (even temporarily) anti-malware software and controls.

  • Disable remote desktop connections.

  • Apply operating system and application security patches expediently.

  • Enable browser-based controls, including pop-up blocking, download screening, and automatic updates.

  • Implement an enterprise-wide antivirus/antimalware application. It is important that the antimalware solutions be configured to update as frequently as possible because many new pieces of malicious code are released daily.

You should also take advantage of sandbox-based solutions to provide a controlled set of resources for guest programs to run in. In a sandbox network, access is typically denied to avoid network-based infections.

Using Detection Controls

Detection controls should identify the presence of malware, alert the user (or network administrator), and in the best-case scenario stop the malware from carrying out its mission. Detection should occur at multiple levels—at the entry point of the network, on all hosts and devices, and at the file level. Detection controls include the following:

  • Real-time firewall detection of suspicious file downloads.

  • Real-time firewall detection of suspicious network connections.

  • Host and network-based intrusion detection systems (IDS) or intrusion prevention systems (IPS).

  • Review and analysis of firewalls, IDS, operating systems, and application logs for indicators of compromise.

  • User awareness to recognize and report suspicious activity.

  • Antimalware and antivirus logs.

  • Help desk (or equivalent) training to respond to malware incidents.
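Several of the controls above come down to matching observed activity against known indicators of compromise. The following simplified Python sketch illustrates the log-review idea; the log format and indicator list are hypothetical, and real analysis platforms do far more than substring matching.

```python
def find_iocs(log_lines, indicators):
    """Return (line_number, indicator) pairs for lines containing a known IOC."""
    hits = []
    for number, line in enumerate(log_lines, start=1):
        for ioc in indicators:
            if ioc in line:
                hits.append((number, ioc))
    return hits

# Hypothetical firewall log excerpt and indicator list
logs = [
    "ALLOW tcp 10.0.0.5 -> 93.184.216.34:443",
    "ALLOW tcp 10.0.0.9 -> 203.0.113.50:6667",
]
print(find_iocs(logs, ["203.0.113.50"]))  # → [(2, '203.0.113.50')]
```

Even this toy version shows why log retention matters: an indicator is only useful if there is recorded activity to match it against.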

What Is Antivirus Software?

Antivirus (AV) software is used to detect, contain, and in some cases eliminate malicious software. Most AV software employs two techniques—signature-based recognition and behavior-based (heuristic) recognition. A common misconception is that AV software is 100% effective against malware intrusions. Unfortunately, that is not the case. Although AV applications are an essential control, they are increasingly limited in their effectiveness. This is due to three factors—the sheer volume of new malware, the phenomenon of “single-instance” malware, and the sophistication of blended threats.

The core of AV software is known as the “engine,” the basic program. The engine relies on virus definition files (known as DAT files) to identify malware. The definition files must be continually updated by the software publisher and then distributed to every user. This was a reasonable task when the number and types of malware were limited. New versions of malware are increasing exponentially, making timely research, publication, and distribution a next-to-impossible task. Complicating this problem is the phenomenon of single-instance malware, that is, variants used only once. The challenge here is that DAT files are developed using historical knowledge, and it is impossible to develop a corresponding DAT file for a single instance that has never been seen before. The third challenge is the sophistication of malware, specifically blended threats. A blended threat occurs when multiple variants of malware (worms, viruses, bots, and so on) are used in concert to exploit system vulnerabilities. Blended threats are specifically designed to circumvent AV and behavioral-based defenses.
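Signature-based recognition, at its simplest, reduces to comparing a fingerprint of a file against a database of known-bad fingerprints, which is exactly why a never-before-seen variant slips through. The following toy Python sketch illustrates the idea; real engines match byte patterns and behavior, not just whole-file hashes.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """A whole-file SHA-256 digest stands in for a real signature."""
    return hashlib.sha256(data).hexdigest()

def is_known_malware(data: bytes, known_bad_digests: set) -> bool:
    return fingerprint(data) in known_bad_digests

# A single-byte change defeats this kind of matching entirely.
sample = b"malicious payload v1"
known_bad = {fingerprint(sample)}
print(is_known_malware(sample, known_bad))                   # True
print(is_known_malware(b"malicious payload v2", known_bad))  # False
```

The last two lines capture the single-instance problem: the “v2” variant has never been cataloged, so the lookup fails even though the file is nearly identical.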

Numerous antivirus and antimalware solutions on the market are designed to detect, analyze, and protect against both known and emerging endpoint threats. The following are the most common types of antivirus and antimalware software:

  • ZoneAlarm PRO Antivirus+, ZoneAlarm PRO Firewall, and ZoneAlarm Extreme Security

  • F-Secure Anti-Virus

  • Kaspersky Anti-Virus

  • McAfee AntiVirus

  • Panda Antivirus

  • Sophos Antivirus

  • Norton AntiVirus

  • ClamAV

  • Immunet AntiVirus

There are numerous other antivirus software companies and products.

ClamAV is an open source antivirus engine sponsored and maintained by Cisco and non-Cisco engineers. You can download ClamAV from www.clamav.net. Immunet is a free community-based antivirus software maintained by Cisco Sourcefire. You can download Immunet from www.immunet.com.

Personal firewalls and host intrusion prevention systems (HIPSs) are software applications that you can install on end-user machines or servers to protect them from external security threats and intrusions. The term personal firewall typically applies to basic software that can control Layer 3 and Layer 4 access to client machines. HIPS provide several features that offer more robust security than a traditional personal firewall, such as host intrusion prevention and protection against spyware, viruses, worms, Trojans, and other types of malware.

FYI: What Are the OSI and TCP/IP Models?

Two main models are currently used to explain the operation of an IP-based network. These are the TCP/IP model and the Open System Interconnection (OSI) model. The TCP/IP model is the foundation for most modern communication networks. Every day, each of us uses some application based on the TCP/IP model to communicate. Think, for example, about a task we consider simple: browsing a web page. That simple action would not be possible without the TCP/IP model.

The TCP/IP model’s name includes the two main protocols we discuss in the course of this chapter: Transmission Control Protocol (TCP) and Internet Protocol (IP). However, the model goes beyond these two protocols and defines a layered approach that can map nearly any protocol used in today’s communication.

In its original definition, the TCP/IP model included four layers, where each of the layers would provide transmission and other services for the level above it. These are the link layer, internet layer, transport layer, and application layer.

In its most modern definition, the link layer is split into two layers to clearly demark the physical and data link services and protocols it includes. The internet layer is also sometimes called the network layer, a name borrowed from another well-known model, the Open System Interconnection (OSI) model.

The OSI reference model is another model that uses abstraction layers to represent the operation of communication systems. The idea behind the design of the OSI model is to be comprehensive enough to take into account advancement in network communications and to be general enough to allow several existing models for communication systems to transition to the OSI model.

The OSI model presents several similarities with the TCP/IP model described above. One of the most important similarities is the use of abstraction layers. As with TCP/IP, each layer provides service for the layer above it within the same computing device while it interacts at the same layer with other computing devices. The OSI model includes seven abstract layers, each representing a different function and service within a communication network:

  • Physical layer, Layer 1 (L1): Provides services for the transmission of bits over the data link.

  • Data link layer, Layer 2 (L2): Includes protocols and functions to transmit information over a link between two connected devices. For example, it provides flow control and L1 error detection.

  • Network layer, Layer 3 (L3): Includes the functions necessary to transmit information across a network and provides abstraction on the underlying means of connection. It defines L3 addressing, routing, and packet forwarding.

  • Transport layer, Layer 4 (L4): Includes services for end-to-end connection establishment and information delivery. For example, it includes error detection, retransmission capabilities, and multiplexing.

  • Session layer, Layer 5 (L5): Provides services to the presentation layer to establish a session and exchange presentation layer data.

  • Presentation layer, Layer 6 (L6): Provides services to the application layer to deal with specific syntax, which is how data is presented to the end user.

  • Application layer, Layer 7 (L7): The last (or first) layer of the OSI model, depending on how you view it. It includes all the services of a user application, including interaction with the end user.

Figure 8-7 illustrates how each layer of the OSI model maps to the corresponding TCP/IP layer.

A figure shows the layers of OSI and TCP/IP models.

FIGURE 8-7 OSI and TCP/IP Models
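The layer mapping in the figure can also be written out directly. The sketch below groups the seven OSI layers into the original four-layer TCP/IP model described earlier; the dictionary and helper names are our own.

```python
# OSI layer -> original four-layer TCP/IP model layer
OSI_TO_TCPIP = {
    "physical":     "link",
    "data link":    "link",
    "network":      "internet",
    "transport":    "transport",
    "session":      "application",
    "presentation": "application",
    "application":  "application",
}

def tcpip_layer(osi_layer: str) -> str:
    """Return the TCP/IP layer corresponding to a named OSI layer."""
    return OSI_TO_TCPIP[osi_layer.lower()]

print(tcpip_layer("Session"))  # → application
```

Note that the mapping is many-to-one: the three upper OSI layers all collapse into the TCP/IP application layer, and L1 and L2 collapse into the link layer.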

Attacks are getting very sophisticated and can evade detection by traditional systems and endpoint protection. Today, attackers have the resources, knowledge, and persistence to beat point-in-time detection. Advanced malware protection solutions provide mitigation capabilities that go beyond point-in-time detection, using threat intelligence to perform retrospective analysis and protection. These solutions also provide device and file trajectory capabilities to allow a security administrator to analyze the full spectrum of an attack.

FYI: CCleaner Antivirus Supply Chain Backdoor

Security researchers at Cisco Talos found a backdoor that was included with version 5.33 of the CCleaner antivirus application. During the investigation and when analyzing the delivery code from the command and control server, they found references to several high-profile organizations including Cisco, Intel, VMWare, Sony, Samsung, HTC, Linksys, Microsoft, and Google Gmail that were specifically targeted through delivery of a second-stage loader. Based on a review of the command and control tracking database, they confirmed that at least 20 victims were served specialized secondary payloads. Interestingly, the array specified contains different domains of high-profile technology companies. This would suggest a very focused actor after valuable intellectual property.

Another example of a potential supply chain attack involves the allegations against security products such as Kaspersky antivirus. The United States Department of Homeland Security (DHS) issued Binding Operational Directive 17-01, calling on all U.S. government departments and agencies to identify any use or presence of Kaspersky products on their information systems and to develop detailed plans to remove and discontinue present and future use of these products. This directive can be found at https://www.dhs.gov/news/2017/09/13/dhs-statement-issuance-binding-operational-directive-17-01.

In Practice

Malicious Software Policy

Synopsis: To ensure a companywide effort to prevent, detect, and contain malicious software.

Policy Statement:

  • The Office of Information Technology is responsible for recommending and implementing prevention, detection, and containment controls. At a minimum, antimalware software will be installed on all computer workstations and servers to prevent, detect, and contain malicious software.

  • Any system found to be out of date with vendor-supplied virus definition and/or detection engines must be documented and remediated immediately or disconnected from the network until it can be updated.

  • The Office of Human Resources is responsible for developing and implementing malware awareness and incident reporting training.

  • All malware-related incidents must be reported to the Office of Information Security.

  • The Office of Information Security is responsible for incident management.

Data Replication

The impact of malware, computer hardware failure, accidental deletion of data by users, and other eventualities is reduced by an effective data backup or replication process. That process should include periodic testing to ensure the integrity of the data as well as the efficiency of the procedures to restore that data in the production environment. Having multiple copies of data is essential for both data integrity and availability. Data replication is the process of copying data to a second location that is available for immediate or near-time use. Data backup is the process of copying and storing data that can be restored to its original location. A company that operates without a tested backup-and-restore or data replication solution is like a flying acrobat working without a net.

When you perform data replication, you copy and then move data between different sites. Data replication is typically measured as follows:

  • Recovery Time Objective (RTO): The targeted time frame in which a business process must be restored after a disruption or a disaster.

  • Recovery Point Objective (RPO): The maximum acceptable amount of data loss measured in time; in other words, the point in time to which data must be recoverable after a major incident.
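These two objectives translate directly into operational parameters: backups (or replication) must run at least as often as the RPO allows, and the tested restore procedure must complete within the RTO. A short sketch of that arithmetic follows; the hour values are illustrative assumptions.

```python
def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """Worst-case data loss equals the gap between backups."""
    return backup_interval_hours <= rpo_hours

def meets_rto(measured_restore_hours: float, rto_hours: float) -> bool:
    """The measured restore time must fit inside the recovery time objective."""
    return measured_restore_hours <= rto_hours

# Nightly backups against a 4-hour RPO would fail the objective:
print(meets_rpo(backup_interval_hours=24, rpo_hours=4))  # False
```

The point of the example is that an RPO is not a wish: it dictates a backup or replication frequency, and an RTO is only credible once a restore has actually been timed.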

Is There a Recommended Backup or Replication Strategy?

Making the decision to back up or to replicate, and how often, should be based on the impact of not being able to access the data either temporarily or permanently. Strategic, operational, financial, transactional, and regulatory requirements must be considered. You should consider several factors when designing a replication or data backup strategy. Reliability is paramount; speed and efficiency are also very important, as are simplicity, ease of use, and, of course, cost. These factors will all define the criteria for the type and frequency of the process.

Data backup strategies primarily focus on compliance and granular recovery—for example, recovering a document created a few months ago or a user’s email a few years ago.

Data replication and recovery focus on business continuity and the quick or easy resumption of operations after a disaster or corruption. One of the key benefits of data replication is minimizing the recovery time objective (RTO). Additionally, data backup is typically used for everything in the organization, from critical production servers to desktops and mobile devices. On the other hand, data replication is often used for mission-critical applications that must always be available and fully operational.

Backed-up or replicated data should be stored at an off-site location, in an environment where it is secure from theft, the elements, and natural disasters such as floods and fires. The backup strategy and associated procedures must be documented.

Figure 8-8 shows an example of data replication between two geographical locations. In this example, data stored at an office in New York, NY, is replicated to a site in Raleigh, North Carolina.

A figure shows the replication of data between two geographical locations.

FIGURE 8-8 Data Replication Between Two Geographical Locations

Organizations can also use data backups or replication to the cloud. Cloud storage refers to using Internet-based resources to store your data. A number of cloud-based providers, such as Google, Amazon, Microsoft Azure, Box, and Dropbox, offer scalable, affordable storage options that can be used in place of (or in addition to) local backup.

Figure 8-9 shows an example of an organization that has an office in San Juan, Puerto Rico, backing up its data in the cloud.

A figure shows an example of cloud-based data backup.

FIGURE 8-9 Cloud-Based Data Backup Example

Different data backup recovery types can be categorized as follows:

  • Traditional recovery

  • Enhanced recovery

  • Rapid recovery

  • Continuous availability

Figure 8-10 lists the benefits and elements of each data backup recovery type.

A figure lists the data backup types.

FIGURE 8-10 Data Backup Types

Understanding the Importance of Testing

The whole point of replicating or backing up data is that it can be accessed or restored if the data is lost or tampered with. In other words, the value of the backup or replication is the assurance that running a restore operation will yield success and that the data will again be available for production and business-critical application systems.

Just as proper attention must be paid to designing and testing the replication or backup solution, the accessibility or restore strategy must also be carefully designed and tested before being approved. Accessibility or restore procedures must be documented. The only way to know whether a replication or backup operation was successful and can be relied upon is to test it. It is recommended that testing access or restores of random files be conducted at least monthly.
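The monthly spot check recommended above can be partially automated: restore a random sample of files to a scratch location and compare content digests against the originals. The following is a simplified Python sketch that operates on in-memory bytes rather than real backup media; the sample size is an illustrative choice.

```python
import hashlib
import random

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def restore_verified(original: bytes, restored: bytes) -> bool:
    """A restore only counts as successful if content digests match."""
    return digest(original) == digest(restored)

def monthly_sample(file_names, k=5):
    """Pick a random subset of files to restore-test this month."""
    names = list(file_names)
    return random.sample(names, min(k, len(names)))
```

Comparing digests rather than file sizes or timestamps catches silent corruption, which is precisely the failure mode a restore test exists to find.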

In Practice

Data Replication Policy

Synopsis: Maintain the availability and integrity of data in the case of error, compromise, failure, or disaster.

Policy Statement:

  • The Office of Information Security is responsible for the design and oversight of the enterprise replication and backup strategy. Factors to be considered include but are not limited to impact, cost, and regulatory requirements.

  • Data contained on replicated or backup media will be protected at the same level of access control as the data on the originating system.

  • The Office of Information Technology is responsible for the implementation, maintenance, and ongoing monitoring of the replication and backup/restoration strategy.

  • The process must be documented.

  • The procedures must be tested on a scheduled basis.

  • Backup media no longer in rotation for any reason will be physically destroyed so that the data is unreadable by any means.

Secure Messaging

In 1971, Ray Tomlinson, a Department of Defense (DoD) researcher, sent the first ARPANET email message to himself. The ARPANET, the precursor to the Internet, was a United States (U.S.) Advanced Research Projects Agency (ARPA) project intended to develop a set of communications protocols to transparently connect computing resources in various geographical locations. Messaging applications were available on ARPANET systems; however, they could be used only for sending messages to users with local system accounts. Tomlinson modified the existing messaging system so that users could send messages to users on other ARPANET-connected systems. After Tomlinson’s modification was available to other researchers, email quickly became the most heavily used application on the ARPANET. Security was given little consideration because the ARPANET was viewed as a trusted community.

Current email architecture is strikingly similar to the original design. Consequently, email servers, email clients, and users are vulnerable to exploit and are frequent targets of attack. Organizations need to implement controls that safeguard the CIA of email hosts and email clients. NIST Special Publication 800-177, Trustworthy Email, recommends security practices for improving the trustworthiness of email. NIST’s recommendations are aimed to help you reduce the risk of spoofed email being used as an attack vector and the risk of email contents being disclosed to unauthorized parties. The recommendations in the special publication apply to both the email sender and receiver.

What Makes Email a Security Risk?

When you send an email, the route it takes in transit is complex, with processing and sorting occurring at several intermediary locations before arriving at the final destination. In its native form, email is transmitted using clear-text protocols. It is almost impossible to know if anyone has read or manipulated your email in transit. Forwarding, copying, storing, and retrieving email is easy (and commonplace); preserving confidentiality of the contents and metadata is difficult. Additionally, email can be used to distribute malware and to exfiltrate company data.

Understanding Clear Text Transmission

Simple Mail Transfer Protocol (SMTP) is the de facto message transport standard for sending email messages. Jon Postel of the University of Southern California developed SMTP in August 1982. At the most basic level, SMTP is a minimal language that defines a communications protocol for delivering email messages. After a message is delivered, users need to access the mail server to retrieve the message. The two most widely supported mailbox access protocols are Post Office Protocol (now POP3), developed in 1984, and Internet Message Access Protocol (IMAP), developed in 1988. The designers never envisioned that someday email would be ubiquitous, and as with the original ARPANET communications, reliable message delivery, rather than security, was the focus. SMTP, POP, and IMAP are all clear-text protocols. This means that the delivery instructions (including access passwords) and email contents are transmitted in a human readable form. Information sent in clear text may be captured and read by third parties, resulting in a breach of confidentiality. Information sent in clear text may be captured and manipulated by third parties, resulting in a breach of integrity.
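The clear-text nature of the protocol is easy to see: an SMTP exchange is nothing more than readable command lines. The sketch below merely assembles the client side of such a dialogue as a string (the addresses and host name are made up); anyone positioned on the network path between the two mail servers sees exactly this.

```python
def smtp_dialogue(sender: str, recipient: str, body: str) -> str:
    """Assemble the client side of a minimal, unencrypted SMTP exchange."""
    return "\r\n".join([
        "HELO client.example.com",
        f"MAIL FROM:<{sender}>",
        f"RCPT TO:<{recipient}>",
        "DATA",
        body,
        ".",          # a lone dot terminates the message body
        "QUIT",
    ])

print(smtp_dialogue("alice@example.com", "bob@example.net",
                    "Subject: quarterly numbers\r\n\r\nSee attached."))
```

Notice that the sender, the recipient, and the full message body are all plainly visible, which is why the encryption protocols discussed next matter.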

Encryption protocols can be used to protect both authentication and contents. Encryption protects the privacy of the message by converting it from (readable) plain text into (scrambled) cipher text. Later implementations of POP and IMAP support encryption: RFC 2595, “Using TLS with IMAP, POP3 and ACAP,” introduced the use of encryption in these popular email standards.

We examine encryption protocols in depth in Chapter 10, “Information Systems Acquisition, Development, and Maintenance.” Encrypted email is often referred to as “secure email.” As we discussed in Chapter 5, “Asset Management and Data Loss Prevention,” email-handling standards should specify the email encryption requirements for each data classification. Most email encryption utilities can be configured to auto-encrypt based on preset criteria, including content, recipient, and email domain.

Understanding Metadata

Documents sent as email attachments or via any other communication or collaboration tools might contain more information than the sender intended to share. The files created by many office programs contain hidden information about the creator of the document, and may even include some content that has been reformatted, deleted, or hidden. This information is known as metadata.

Keep this in mind in the following situations:

  • If you recycle documents by making changes and sending them to new recipients (that is, using a boilerplate contract or a sales proposal).

  • If you use a document created by another person. In programs such as Microsoft Office, the document might list the original person as the author.

  • If you use a feature for tracking changes. Be sure to accept or reject changes, not just hide the revisions.
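For Office Open XML files such as .docx, this metadata is not hidden deeply: the file is a ZIP archive, and author information sits in the docProps/core.xml part. The following Python sketch shows one way to inspect it before a document leaves the organization; the namespace URIs are the standard core-properties ones, while the helper names are our own.

```python
import zipfile
import xml.etree.ElementTree as ET

NS = {
    "dc": "http://purl.org/dc/elements/1.1/",
    "cp": ("http://schemas.openxmlformats.org/package/2006/"
           "metadata/core-properties"),
}

def author_from_core_xml(core_xml) :
    """Pull dc:creator out of a docProps/core.xml payload."""
    root = ET.fromstring(core_xml)
    element = root.find("dc:creator", NS)
    return None if element is None else element.text

def docx_author(path: str):
    """A .docx file is a ZIP archive; read its core properties part."""
    with zipfile.ZipFile(path) as archive:
        return author_from_core_xml(archive.read("docProps/core.xml"))
```

Office suites also provide built-in “inspect document” features; a script like this is simply a way to make the check part of an automated outbound workflow.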

Understanding Embedded Malware

Email is an effective method to attack and ultimately infiltrate an organization. Common mechanisms include embedding malware in an attachment and directing the recipient to click a hyperlink that connects to a malware distribution site (unbeknownst to the user). Increasingly, attackers are using email to deliver zero-day attacks at targeted organizations. A zero-day exploit is one that takes advantage of a security vulnerability on the same day that the vulnerability becomes publicly or generally known.

Malware can easily be embedded in common attachments, such as PDF, Word, and Excel files, or even a picture. Not allowing any attachments would simplify email security; however, it would dramatically reduce the usefulness of email. Determining which types of attachments to allow and which to filter out must be an organizational decision. Filtering is a mail server function and is based on the file type. The effectiveness of filtering is limited because attackers can modify the file extension. In keeping with a defense-in-depth approach, allowed attachments should be scanned for malware at the mail gateway, email server, and email client.
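Because a file extension is trivially renamed, a defense-in-depth filter also inspects the first bytes of the content, the so-called magic numbers. The sketch below uses a few well-known signatures and a deliberately narrow mismatch rule as an illustration; production gateways maintain far larger signature sets.

```python
# A few well-known file signatures (magic numbers)
SIGNATURES = {
    b"MZ": "windows-executable",
    b"%PDF": "pdf",
    b"PK\x03\x04": "zip-container",  # also .docx/.xlsx
}

def detect_type(data: bytes) -> str:
    for magic, kind in SIGNATURES.items():
        if data.startswith(magic):
            return kind
    return "unknown"

def extension_lies(filename: str, data: bytes) -> bool:
    """Flag a claimed .pdf whose content is not actually a PDF."""
    claims_pdf = filename.lower().endswith(".pdf")
    return claims_pdf and detect_type(data) != "pdf"

print(extension_lies("invoice.pdf", b"MZ\x90\x00..."))  # True
print(extension_lies("invoice.pdf", b"%PDF-1.7 ..."))   # False
```

The first call shows the classic attack: an executable renamed to invoice.pdf passes an extension filter but fails the content check.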

A hyperlink is a word, phrase, or image that is programmatically configured to connect to another document, bookmark, or location. Hyperlinks have two components—the text to display (such as www.goodplace.com) and the connection instructions. Genuine-looking hyperlinks are used to trick email recipients into connecting to malware distribution sites. Most email client applications have the option to disable active hyperlinks. The challenge here is that hyperlinks are often legitimately used to direct the recipient to additional information. In both cases, users need to be taught to not click on links or open any attachment associated with an unsolicited, unexpected, or even mildly suspicious email.
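One programmatic control follows directly from the two-component structure of a hyperlink: compare the host named in the display text with the host the link actually targets. The Python sketch below assumes the display text looks like a URL or a bare hostname; the example domains are illustrative.

```python
from urllib.parse import urlparse

def hostname_of(text: str):
    """Extract a hostname from a URL or a bare www-style name."""
    if "://" not in text:
        text = "http://" + text
    return urlparse(text).hostname

def deceptive_link(display_text: str, href: str) -> bool:
    """True when the visible text names a different host than the target."""
    shown, real = hostname_of(display_text), hostname_of(href)
    return shown is not None and real is not None and shown != real

print(deceptive_link("www.goodplace.com", "http://malware.example.net/x"))  # True
print(deceptive_link("www.goodplace.com", "https://www.goodplace.com/"))    # False
```

A check like this cannot judge links whose display text is an innocuous phrase such as “click here,” which is why user training remains part of the control.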

Controlling Access to Personal Email Applications

Access to personal email accounts should not be allowed from a corporate network. Email delivered via personal email applications such as Gmail bypasses all the controls the company has invested in, such as email filtering and scanning. A fair comparison: you install a lock, lights, and an alarm system on the front door of your home but leave the back door wide open all the time on the assumption that the back door is really used only occasionally by friends and family.

In addition to outside threats, consideration needs to be given to both the malicious and unintentional insider threat. If an employee decides to correspond with a customer via personal email, or if an employee chooses to exfiltrate information and send it via personal email, there would be no record of the activity. From both an HR and a forensic perspective, this would hamper an investigation and subsequent response.

Understanding Hoaxes

Every year, a vast amount of money is lost, in the form of support costs and equipment workload, due to hoaxes sent by email. A hoax is a deliberately fabricated falsehood. An email hoax may be a fake virus warning or false information of a political or legal nature and often borders on criminal mischief. Some hoaxes ask recipients to take action that turns out to be damaging—deleting supposedly malicious files from their local computer, sending uninvited mail, randomly boycotting organizations for falsified reasons, or defaming an individual or group by forwarding the message.

Understanding the Risks Introduced by User Error

The three most common user errors that impact the confidentiality of email are sending email to the wrong person, choosing Reply All instead of Reply, and using Forward inappropriately.

It is easy to mistakenly send email to the wrong address. This is especially true with email clients that autocomplete addresses based on the first three or four characters entered. All users must be made aware of this and must pay strict attention to the email address entered in the To field, along with the CC and BCC fields when used.

The consequence of choosing Reply All instead of Reply can be significant. The best-case scenario is embarrassment. In the worst cases, confidentiality is violated by distributing information to those who do not have a “need to know.” In regulated sectors such as health care and banking, violating the privacy of patients and/or clients is against the law.

Forwarding has similar implications. Assume that two people have been emailing back and forth using the Reply function. Their entire conversation is contained in the message thread. Now suppose that one of them decides that something in the last email is of interest to a third person and forwards it. What that person just did was forward the entire thread of emails exchanged between the two original correspondents. This may well not have been the person’s intent and may violate the privacy of the other original correspondent.

Are Email Servers at Risk?

Email servers are hosts that deliver, forward, and store email. Email servers are attractive targets because they are a conduit between the Internet and the internal network. Protecting an email server from compromise involves hardening the underlying operating system, the email server application, and the network to prevent malicious entities from directly attacking the mail server. Email servers should be single-purpose hosts, and all other services should be disabled or removed. Email server threats include relay abuse and DoS attacks.

Understanding Relay Abuse and Blacklisting

The role of an email server is to process and relay email. The default posture for many email servers is to process and relay any mail sent to the server. This is known as open mail relay. The ability to relay mail through a server can be (and often is) taken advantage of by those who benefit from the illegal use of the resource. Criminals conduct Internet searches for email servers configured to allow relay. After they locate an open relay server, they use it for distributing spam and malware. The email appears to come from the company whose email server was misappropriated. Criminals use this technique to hide their identity. This is not only an embarrassment but can also result in legal and productivity ramifications.

In a response to the deluge of spam and email malware distribution, blacklisting has become a standard practice. A blacklist is a list of email addresses, domain names, or IP addresses known to send unsolicited commercial email (spam) or email-embedded malware. The process of blacklisting is to use the blacklist as an email filter. The receiving email server checks the incoming emails against the blacklist, and when a match is found, the email is denied.
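
The blacklist check itself is typically implemented as a DNS lookup: the receiving server reverses the octets of the sender's IP address and queries the result as a subdomain of the blacklist zone. A successful lookup means the address is listed. The sketch below uses a hypothetical DNSBL zone; real deployments query operators such as Spamhaus under their terms of use.

```python
# Sketch: DNS-based blacklist (DNSBL) lookup. The zone name is hypothetical.
import socket

def dnsbl_query_name(ip: str, zone: str = "dnsbl.example.org") -> str:
    """Build the DNSBL query name: reversed octets under the blacklist zone."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_blacklisted(ip: str, zone: str = "dnsbl.example.org") -> bool:
    """An A-record answer means the IP is listed; NXDOMAIN means it is not."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False   # not listed (or the zone is unreachable)

print(dnsbl_query_name("192.0.2.25"))  # 25.2.0.192.dnsbl.example.org
```

A sending IP of 192.0.2.25 is therefore checked by resolving 25.2.0.192.dnsbl.example.org; if the zone answers, the incoming email is denied.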

Understanding Denial of Service Attacks

The SMTP protocol is especially vulnerable to DDoS attacks because, by design, it accepts and queues incoming emails. To mitigate the effects of email DoS attacks, the mail server can be configured to limit the amount of operating system resources it can consume. Some examples include configuring the mail server application so that it cannot consume all available space on its hard drives or partitions, limiting the size of attachments that are allowed, and ensuring log files are stored in a location that is sized appropriately.
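
As one concrete illustration of such limits, a Postfix-based mail server (an assumption; other mail server applications expose equivalent settings) might cap message size, mailbox size, queue space, and per-client connection rates in main.cf. The values shown are illustrative and should be tuned to the environment.

```
# /etc/postfix/main.cf (illustrative fragment; tune values to your environment)
message_size_limit = 10485760        # cap messages, including attachments, at 10 MB
mailbox_size_limit = 1073741824      # cap individual mailboxes at 1 GB
queue_minfree = 20971520             # refuse mail if queue free space falls below 20 MB
smtpd_client_connection_count_limit = 20   # concurrent connections per client
smtpd_client_connection_rate_limit = 60    # new connections per client per time unit
```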

Other Collaboration and Communication Tools

Nowadays, organizations use more than just email. Many use Slack, Cisco Spark, WebEx, Telepresence, and many other collaboration tools for internal communications. Most of these services or products provide encryption capabilities, both for data in transit and for data at rest. Most are also cloud services. You must have a good strategy for securing each of these solutions and understanding its risks, including knowing which risks you can control and which you cannot.

Are Collaboration and Communication Services at Risk?

Absolutely! Just like email, collaboration tools like WebEx, Slack, and others need to be evaluated. This is why the United States Federal Government created the Federal Risk and Authorization Management Program, or FedRAMP. FedRAMP is a program that specifies a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. This includes cloud services such as Cisco WebEx.

According to its website (https://www.fedramp.gov), the following are the goals of FedRAMP:

  • Accelerate the adoption of secure cloud solutions through reuse of assessments and authorizations

  • Increase confidence in security of cloud solutions

  • Achieve consistent security authorizations using a baseline set of agreed-upon standards to be used for cloud product approval in or outside of FedRAMP

  • Ensure consistent application of existing security practice

  • Increase confidence in security assessments

  • Increase automation and near real-time data for continuous monitoring

Also according to its website, the following are the benefits of FedRAMP:

  • Increase re-use of existing security assessments across agencies

  • Save significant cost, time, and resources—“do once, use many times”

  • Improve real-time security visibility

  • Provide a uniform approach to risk-based management

  • Enhance transparency between government and Cloud Service Providers (CSPs)

  • Improve the trustworthiness, reliability, consistency, and quality of the Federal security authorization process

In Practice

Email and Email Systems Security Policy

Synopsis: To recognize that email and messaging platforms are vulnerable to unauthorized disclosure and attack, and to assign responsibility for safeguarding said systems.

Policy Statement:

  • The Office of Information Security is responsible for assessing the risk associated with email and email systems. Risk assessments must be performed at a minimum biannually or whenever there is a change trigger.

  • The Office of Information Security is responsible for creating email security standards, including but not limited to attachment and content filtering, encryption, malware inspection, and DDoS mitigation.

  • External transmission of data classified as “protected” or “confidential” must be encrypted.

  • Remote access to company email must conform to the corporate remote access standards.

  • Access to personal web-based email from the corporate network is not allowed.

  • The Office of Information Technology is responsible for implementing, maintaining, and monitoring appropriate controls.

  • The Office of Human Resources is responsible for providing email security user training.

Activity Monitoring and Log Analysis

NIST defines a log as a record of the events occurring within an organization’s systems and networks. Logs are composed of log entries; each entry contains information related to a specific event that has occurred within a system or network. Security logs are generated by many sources, including security software, such as AV software, firewalls, and IDS/IPS systems; operating systems on servers, workstations, and networking equipment; and applications.

Another example of “records” from network activity is NetFlow. NetFlow was initially created for billing and accounting of network traffic and to measure other IP traffic characteristics, such as bandwidth utilization and application performance. NetFlow has also been used as a network-capacity planning tool and to monitor network availability. Nowadays, NetFlow is used as a network security tool because its reporting capabilities provide nonrepudiation, anomaly detection, and investigative capabilities. As network traffic traverses a NetFlow-enabled device, the device collects traffic flow information and provides a network administrator or security professional with detailed information about such flows.

The Internet Protocol Flow Information Export (IPFIX) is a network flow standard led by the Internet Engineering Task Force (IETF). IPFIX defines a common, universal standard for exporting flow information from routers, switches, firewalls, and other infrastructure devices, and it specifies how flow information should be formatted and transferred from an exporter to a collector.

Logs are a key resource when performing auditing and forensic analysis, supporting internal investigations, establishing baselines, and identifying operational trends and long-term problems. Routine log analysis is beneficial for identifying security incidents, policy violations, fraudulent activity, and operational problems. Third-party security specialists should be engaged for log analysis if in-house knowledge is not sufficient.

Big data analytics is the practice of studying large amounts of data of a variety of types, from a variety of sources, to uncover patterns, unknown facts, and other useful information. Big data analytics can play a crucial role in cybersecurity. Many in the industry have changed the tone of the conversation: it is no longer a question of if or when your network will be compromised; the assumption is that it has already been compromised. They suggest focusing on minimizing the damage and increasing visibility to aid in identifying the next compromise.

Advanced analytics can be run against very large diverse data sets to find indicators of compromise (IOCs). These data sets can include different types of structured and unstructured data processed in a “streaming” fashion or in batches. Any organization can collect data just for the sake of collecting data; however, the usefulness of such data depends on how actionable such data is to make any decisions (in addition to whether the data is regularly monitored and analyzed).

What Is Log Management?

Log management activities involve configuring the log sources, including log generation, storage, and security, performing analysis of log data, initiating appropriate responses to identified events, and managing the long-term storage of log data. Log management infrastructures are typically based on one of the two major categories of log management software: syslog-based centralized logging software and security information and event management software (SIEM). Syslog provides an open framework based on message type and severity. Security information and event management (SIEM) software includes commercial applications and often uses proprietary processes. NIST Special Publication SP 800-92, Guide to Computer Security Log Management, published September 2006, provides practical, real-world guidance on developing, implementing, and maintaining effective log management practices throughout an enterprise. The guidance in SP 800-92 covers several topics, including establishing a log management infrastructure.

Prioritizing and Selecting Which Data to Log

Ideally, data would be collected from every significant device and application on the network. The challenge is that network devices and applications can generate hundreds of events per minute. A network with even a small number of devices can generate millions of events per day. The sheer volume can overwhelm a log management program. Prioritization and inclusion decisions should be based on system or device criticality, data protection requirements, vulnerability to exploit, and regulatory requirements. For example, websites and servers that serve as the public face of the company are vulnerable specifically because they are Internet accessible. E-commerce application and database servers may drive the company’s revenue and are targeted because they contain valuable information, such as credit card information. Internal devices are required for day-to-day productivity; access makes them vulnerable to insider attacks. In addition to identifying suspicious activity, attacks, and compromises, log data can be used to better understand normal activity, provide operational oversight, and provide a historical record of activity. The decision-making process should include information system owners as well as information security, compliance, legal, HR, and IT personnel.

Systems within an IT infrastructure are often configured to generate and send information every time a specific event happens. An event, as described in NIST SP 800-61 revision 2, “Computer Security Incident Handling Guide,” is any observable occurrence in a system or network, whereas a security incident is an event that violates the security policy of an organization. One important task of a security operation center analyst is to determine when an event constitutes a security incident. An event log (or simply a log) is a formal record of an event and includes information about the event itself. For example, a log may contain a timestamp, an IP address, an error code, and so on.

Event management includes administrative, physical, and technical controls that allow for the proper collection, storage, and analysis of events. Event management plays a key role in information security because it allows for the detection and investigation of a real-time attack, enables incident response, and allows for statistical and trending reporting. If an organization lacks information about past events and logs, this may reduce its ability to investigate incidents and perform a root-cause analysis.

An additional important function of monitoring and event management is compliance. Many compliance frameworks (for example, ISO and PCI DSS) mandate log management controls and practices. One of the most basic tasks of event management is log collection. Many systems in the IT infrastructure are in fact capable of generating logs and sending them to a remote system that will store them. Log storage is a critical task for maintaining log confidentiality and integrity. Confidentiality is needed because the logs may contain sensitive information. In some scenarios, logs may need to be used as evidence in court or as part of an incident response. The integrity of the logs is fundamental for them to be used as evidence and for attribution.

The facilities used to store logs need to be protected against unauthorized access, and the logs’ integrity should be maintained. Enough storage should be allocated so that logs are not lost due to lack of storage.

The information collected via logs usually includes, but is not limited to, the following:

  • User ID

  • System activities

  • Timestamps

  • Successful or unsuccessful access attempts

  • Configuration changes

  • Network addresses and protocols

  • File access activities

Different systems may send their log messages in various formats, depending on their implementation.
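
As a small illustration of that variability, the sketch below maps two invented log line formats onto a common set of fields; unrecognized lines are retained rather than dropped so no data is silently lost.

```python
# Sketch: normalize heterogeneous log lines into common fields (ts, host, msg).
# Both line formats below are invented; real sources vary widely.
import re

PATTERNS = [
    # "2023-04-01T12:00:05Z fw01 DENY tcp 203.0.113.9 -> 10.0.0.5"
    re.compile(r"^(?P<ts>\d{4}-\d{2}-\d{2}T\S+)\s+(?P<host>\S+)\s+(?P<msg>.+)$"),
    # "Apr  1 12:00:05 web01 sshd[812]: Failed password for root"
    re.compile(r"^(?P<ts>\w{3}\s+\d+\s[\d:]{8})\s(?P<host>\S+)\s(?P<msg>.+)$"),
]

def normalize(line: str) -> dict:
    """Map a raw log line onto common fields; keep unparsed lines as-is."""
    for pattern in PATTERNS:
        m = pattern.match(line)
        if m:
            return m.groupdict()
    return {"ts": None, "host": None, "msg": line}

print(normalize("2023-04-01T12:00:05Z fw01 DENY tcp 203.0.113.9 -> 10.0.0.5"))
print(normalize("Apr  1 12:00:05 web01 sshd[812]: Failed password for root"))
```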

According to NIST SP 800-92, “Guide to Computer Security Log Management,” three categories of logs are of interest for security professionals:

  • Logs generated by security software: This includes logs and alerts generated by the following software and devices:

    • Antivirus/antimalware

    • IPS and IDS

    • Web proxies

    • Remote access software

    • Vulnerability management software

    • Authentication servers

    • Infrastructure devices (including firewalls, routers, switches, and wireless access points)

  • Logs generated by the operating system: This includes the following:

    • System events

    • Audit records

  • Logs generated by applications: This includes the following:

    • Connection and session information

    • Usage information

    • Significant operational action

Once collected, the logs need to be analyzed and reviewed to detect security incidents and to make sure security controls are working properly. This is not a trivial task, because the analyst may need to analyze an enormous amount of data. It is important for the security professional to understand which logs are relevant and should be collected for the purpose of security administration and event and incident management.

Systems that are used to collect and store the logs usually offer a management interface through which the security analyst is able to view the logs in an organized way, filter out unnecessary entries, and produce historical reporting. At some point, logs may not be needed anymore. The determination of how long a log needs to be kept is included in the log retention policy. Logs can be deleted from the system or archived in separate systems. One of the most used protocols for event notification is syslog, which is defined in RFC 5424.

The syslog protocol specifies three main entities:

  • Originator: The entity that generates a syslog message (for example, a router).

  • Collector: The entity that receives information about an event in syslog format (for example, a syslog server).

  • Relay: An entity that can receive messages from originators and forward them to other relays or collectors.
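
The syslog message format itself (RFC 5424) encodes facility and severity into a single priority value, PRI = facility × 8 + severity, followed by a version number, timestamp, hostname, app name, and the message. The sketch below builds a minimal message with illustrative values; the nilvalue "-" marks fields left unpopulated.

```python
# Sketch: build a minimal RFC 5424 syslog message. Hostname and app name
# are illustrative; "-" is the nilvalue for fields we do not populate
# (PROCID, MSGID, and structured data).
def rfc5424_message(facility: int, severity: int, timestamp: str,
                    hostname: str, app: str, msg: str) -> str:
    pri = facility * 8 + severity   # e.g. facility 4 (auth), severity 3 (error) -> 35
    return f"<{pri}>1 {timestamp} {hostname} {app} - - - {msg}"

m = rfc5424_message(4, 3, "2023-04-01T12:00:05Z", "web01", "sshd",
                    "Failed password for root")
print(m)  # <35>1 2023-04-01T12:00:05Z web01 sshd - - - Failed password for root
```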

What Are Security Information and Event Managers?

The Security Information and Event Manager (SIEM) is a specialized device or software for security event management. It typically allows for the following functions:

  • Log collection: This includes receiving information from devices with multiple protocols and formats, storing the logs, and providing historical reporting and log filtering.

  • Log normalization: This function extracts relevant attributes from logs received in different formats and stores them in a common data model or template. This allows for faster event classification and operations. Non-normalized logs are usually kept for archive, historical, and forensic purposes.

  • Log aggregation: This function aggregates events based on common attributes and reduces duplicates.

  • Log correlation: This is probably one of the most important functions of an SIEM. It refers to the ability of the system to associate events gathered from various systems, in different formats and at different times, and create a single actionable event for the security analyst or investigator. Often the quality of an SIEM is related to the quality of its correlation engine.

  • Reporting: Event visibility is also a key functionality of an SIEM. Reporting capabilities usually include real-time monitoring and historical reports.

Most modern SIEMs also integrate with other information systems to gather additional contextual information to feed the correlation engine. For example, they can integrate with an identity management system to get contextual information about users or with NetFlow collectors to get additional flow-based information.
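
As a toy illustration of the aggregation function described above, the sketch below collapses duplicate events sharing a source and event type into a single record with a count. The field names are an assumed common data model, not any particular SIEM's schema.

```python
# Sketch: SIEM-style aggregation -- events sharing (src, type) collapse into
# one record with a count. Event values are invented.
from collections import Counter

events = [
    {"src": "203.0.113.9", "type": "login_failure"},
    {"src": "203.0.113.9", "type": "login_failure"},
    {"src": "203.0.113.9", "type": "login_failure"},
    {"src": "10.0.0.5",    "type": "config_change"},
]

def aggregate(events):
    counts = Counter((e["src"], e["type"]) for e in events)
    return [{"src": s, "type": t, "count": n} for (s, t), n in counts.items()]

for record in aggregate(events):
    print(record)
```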

Analyzing Logs

Done correctly and consistently, log analysis is a reliable and accurate way to discover potential threats, identify malicious activity, and provide operational oversight. Log analysis techniques include correlation, sequencing, signature, and trend analysis:

  • Correlation ties individual log entries together based on related information.

  • Sequencing examines activity based on patterns.

  • Signature compares log data to “known bad” activity.

  • Trend analysis identifies activity over time that in isolation might appear normal.

A common mistake made when analyzing logs is to focus on “denied” activity. Although it is important to know what was denied, it is much more important to focus on allowed activity that may put the organization at risk.
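
Correlation can flag exactly that kind of allowed activity: for example, a successful login preceded by a burst of failures from the same source. The sketch below uses invented events, a five-minute window, and an arbitrary failure threshold.

```python
# Sketch: correlate a *successful* login with preceding failures from the
# same source -- allowed activity worth flagging. Timestamps are plain
# seconds for simplicity; the events and thresholds are invented.
WINDOW = 300       # correlate over a 5-minute window
THRESHOLD = 3      # failures before a success become suspicious

events = [
    (100, "203.0.113.9", "fail"),
    (130, "203.0.113.9", "fail"),
    (150, "203.0.113.9", "fail"),
    (170, "203.0.113.9", "success"),
    (200, "10.0.0.5",    "success"),
]

def suspicious_logins(events):
    flagged = []
    for ts, src, outcome in events:
        if outcome != "success":
            continue
        recent_failures = sum(1 for t, s, o in events
                              if s == src and o == "fail"
                              and ts - WINDOW <= t < ts)
        if recent_failures >= THRESHOLD:
            flagged.append((ts, src))
    return flagged

print(suspicious_logins(events))   # [(170, '203.0.113.9')]
```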

FYI: Log Review Regulatory Requirements and Contractual Obligations

Monitoring event and audit logs is an integral part of complying with a variety of federal regulations, including the Gramm-Leach-Bliley Act. In addition, as of July 2013, at least 48 states and U.S. territories have instituted security breach notification laws that require businesses to monitor and protect specific sets of consumer data:

  • Gramm-Leach-Bliley Act (GLBA) requires financial institutions to protect their customers’ information against security threats. Log management can be helpful in identifying possible security violations and resolving them effectively.

  • Health Insurance Portability and Accountability Act of 1996 (HIPAA) includes security standards for certain health information, including the need to perform regular reviews of audit logs and access reports. Section 4.22 specifies that documentation of actions and activities needs to be retained for at least six years.

  • Federal Information Security Management Act of 2002 (FISMA) requirements found in NIST SP 800-53, Recommended Security Controls for Federal Information Systems, describe several controls related to log management, including the generation, review, protection, and retention of audit records, as well as the actions to be taken in the event of an audit failure.

  • Payment Card Industry Data Security Standard (PCI DSS) applies to organizations that store, process, or transmit cardholder data for payment cards. The fifth core PCI DSS principle, Regularly Monitor and Test Networks, includes the requirement to track and monitor all access to network resources and cardholder data.

Firewall logs can be used to detect security threats, such as network intrusion, virus attacks, DoS attacks, anomalous behavior, employee web activities, web traffic analysis, and malicious insider activity. Reviewing log data provides oversight of firewall administrative activity and change management, including an audit trail of firewall configuration changes. Bandwidth monitoring can provide information about sudden changes that may be indicative of an attack.

Web server logs are another rich source of data to identify and thwart malicious activity. HTTP status codes indicating redirection, client error, or server error can indicate malicious activity as well as malfunctioning applications or bad HTML code. Checking the logs for Null Referrers can identify hackers who are scanning the website with automated tools that do not follow proper protocols. Log data can also be used to identify web attacks, including SQL injection, cross-site scripting (XSS), and directory traversal. As with the firewall, reviewing web server log data provides oversight of web server/website administrative activity and change management, including an audit trail of configuration changes.
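
The sketch below performs a minimal version of such a review, scanning invented combined-format log lines for error status codes and null referrers.

```python
# Sketch: scan combined-format web server log lines for client/server error
# status codes (>= 400) and empty ("null") referrers. Sample lines invented.
import re

LINE = re.compile(r'"(?P<req>[^"]*)" (?P<status>\d{3}) \S+ "(?P<referer>[^"]*)"')

def review(lines):
    findings = []
    for line in lines:
        m = LINE.search(line)
        if not m:
            continue
        status = int(m.group("status"))
        if status >= 400:
            findings.append(("error_status", status, m.group("req")))
        if m.group("referer") in ("", "-"):
            findings.append(("null_referer", status, m.group("req")))
    return findings

logs = [
    '203.0.113.9 - - [01/Apr/2023:12:00:05 +0000] "GET /admin HTTP/1.1" 404 512 "-" "scanner"',
    '198.51.100.7 - - [01/Apr/2023:12:00:09 +0000] "GET / HTTP/1.1" 200 1024 "https://a.example" "Mozilla"',
]
for finding in review(logs):
    print(finding)
```

Here the first line triggers both checks (a 404 on a sensitive path with no referrer, typical of automated scanning), while the normal request produces no findings.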

Authentication server logs document user, group, and administrative account activity. Activity that should be mined and analyzed includes account lockouts, invalid account logons, invalid passwords, password changes, and user management changes such as new and changed accounts. It also includes computer management events (such as when audit logs are cleared or computer account names are changed), group management events (such as the creation or deletion of groups and the addition of users to high-security groups), and user activity outside of logon time restrictions. Operational activity, such as the installation of new software, the success or failure of patch management, server reboots, and policy changes, should be on the radar as well.

In Practice

Security Log Management Policy

Synopsis: To require that devices, systems, and applications support logging and to assign responsibility for log management.

Policy Statement:

  • Devices, systems, and applications implemented by the company must support the capability to log activities, including data access and configuration modifications. Exceptions must be approved by the COO.

  • Access to logs must be restricted to individuals with a need to know.

  • Logs must be retained for a period of 12 months.

  • Log analysis reports must be retained for 36 months.

  • The Office of Information Security is responsible for the following:

    • Developing log management standards, procedures, and guidelines

    • Prioritizing log management appropriately throughout the organization

    • Creating and maintaining a secure log management infrastructure

    • Establishing log analysis incident response procedures

    • Providing proper training for all staff with log management responsibilities

  • The Office of Information Technology is responsible for the following:

    • Managing and monitoring the log management infrastructure

    • Proactively analyzing log data to identify ongoing activity and signs of impending problems

    • Providing reports to the Office of Information Security

Service Provider Oversight

Many companies outsource some aspect of their operations. These relationships, however beneficial, have the potential to introduce vulnerabilities. From a regulatory perspective, you can outsource the work, but you cannot outsource the legal responsibility. Organizational CIA requirements must extend to all service providers and business partners that store, process, transmit, or access company data and information systems. Third-party controls must be required to meet or, in some cases, exceed internal requirements. When working with service providers, organizations need to exercise due diligence in selecting providers, contractually obligate providers to implement appropriate security controls, and monitor service providers for ongoing compliance with contractual obligations.

FYI: Strengthening the Resilience of Outsourced Technology Services

The Federal Financial Institutions Examination Council (FFIEC) Information Technology Examination Handbook (IT Handbook) Business Continuity Booklet, Appendix J, “Strengthening the Resilience of Outsourced Technology Services,” provides guidance and examination procedures to assist examiners and bankers in evaluating a financial institution’s risk management processes to establish, manage, and monitor IT outsourcing and third-party relationships. However, the guidance is useful for organizations of all types and sizes. A number of the recommendations in this section are from the FFIEC guidance. To download the booklet from the FFIEC site, visit https://ithandbook.ffiec.gov/it-booklets/business-continuity-planning/appendix-j-strengthening-the-resilience-of-outsourced-technology-services.aspx.

What Is Due Diligence?

Vendor due diligence describes the process or methodology used to assess the adequacy of a service provider. The depth and formality of the due diligence performed may vary based on the risk of the outsourced relationship. Due diligence investigation may include the following:

  • Corporate history

  • Qualifications, backgrounds, and reputations of company principals

  • Financial status, including reviews of audited financial statements

  • Service delivery capability, status, and effectiveness

  • Technology and systems architecture

  • Internal controls environment, security history, and audit coverage

  • Legal and regulatory compliance, including any complaints, litigation, or regulatory actions

  • Reliance on and success in dealing with third-party service providers

  • Insurance coverage

  • Incident response capability

  • Disaster recovery and business continuity capability

Documentation requested from a service provider generally includes financial statements, security-related policies, proof of insurance, subcontractor disclosure, disaster recovery and continuity of operations plans, incident notification and response procedures, security testing results, and independent audit reports, such as an SSAE18.

Understanding Independent Audit Reports

The objective of an independent audit is to objectively evaluate the effectiveness of operational, security, and compliance controls. Statements on Standards for Attestation Engagements (SSAE) No. 18 audit reports, known as SSAE18 reports, have become the most widely accepted due diligence documentation. SSAE18 was developed by the American Institute of CPAs (AICPA). SSAE18 engagements produce System and Organization Controls (SOC) reports. SOC reports specifically address one or more of the following five key system attributes:

  • Security: The system is protected against unauthorized access (both physical and logical).

  • Availability: The system is available for operation and use as committed or agreed.

  • Processing integrity: System processing is complete, accurate, timely, and authorized.

  • Confidentiality: Information designated as confidential is protected as committed or agreed.

  • Privacy: Personal information is collected, used, retained, disclosed, and disposed of in conformity with the commitments in the entity’s privacy notice, and with criteria set forth in Generally Accepted Privacy Principles (GAPP) issued by the AICPA and Canadian Institute of Chartered Accountants.

SSAE audits must be attested to by a certified public accounting (CPA) firm. Service organizations that have had an SOC engagement within the past year may register with the AICPA to display the applicable logo.

What Should Be Included in Service Provider Contracts?

Service provider contracts should include a number of information security–related clauses, including performance standards, security and privacy compliance requirements, incident notification, business continuity, disaster recovery commitments, and auditing options. The objective is to ensure that the service provider exercises due care, which is the expectation that reasonable efforts will be made to avoid harm and minimize risk.

Performance standards define minimum service-level requirements and remedies for failure to meet standards in the contract—for example, system uptime, deadlines for completing batch processing, and number of processing errors. MTTR (mean time to repair) may be specified in a service level agreement (SLA), along with a standard reference to Tier 1, Tier 2, and Tier 3 performance factors. All support service requests begin in Tier 1, where the issue is identified, triaged, and initially documented. Any support service requests that cannot be resolved with Tier 1 support are escalated to Tier 2, where advanced support staff are assigned for higher-level troubleshooting of software or hardware issues. Similarly, any requests that cannot be resolved with Tier 2 support are escalated to Tier 3. How effective your staff is at each tier should be measured and analyzed to improve performance.
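
As an illustration of how an MTTR figure referenced in an SLA might be computed, the sketch below averages repair times from invented ticket records, both overall and per support tier.

```python
# Sketch: compute MTTR (mean time to repair) from closed support tickets,
# overall and per tier. The ticket data is invented.
tickets = [
    {"tier": 1, "repair_hours": 0.5},
    {"tier": 1, "repair_hours": 1.0},
    {"tier": 2, "repair_hours": 4.0},
    {"tier": 3, "repair_hours": 12.5},
]

def mttr(tickets, tier=None):
    """Average repair time in hours; optionally restricted to one tier."""
    relevant = [t["repair_hours"] for t in tickets
                if tier is None or t["tier"] == tier]
    return sum(relevant) / len(relevant) if relevant else 0.0

print(mttr(tickets))          # 4.5  (overall)
print(mttr(tickets, tier=1))  # 0.75 (Tier 1 only)
```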

Security and privacy compliance requirements address the service provider's stewardship of information and information systems, as well as organizational processes, strategies, and plans. At a minimum, the service provider's control environment should be consistent with organizational policies and standards. The agreement should prohibit the service provider and its agents from using or disclosing the information except as necessary to provide the contracted services, and should require them to protect against unauthorized use. If the service provider stores, processes, receives, or accesses nonpublic personal information (NPPI), the contract should state that the service provider will comply with all applicable security and privacy regulations.

Incident notification requirements should be clearly spelled out. In keeping with state breach notification laws, unless otherwise instructed by law enforcement, the service provider must disclose both verified security breaches and suspected incidents. The latter is often a point of contention. The contract should specify the time frame for reporting, as well as the type of information that must be included in the incident report.

Last, the contract should specify the types of audit reports the organization is entitled to receive (for example, financial, internal control, and security reviews). The contract should specify the audit frequency, any charges for obtaining the audits, as well as the rights to obtain the results of the audits in a timely manner. The contract may also specify rights to obtain documentation of the resolution of any deficiencies and to inspect the processing facilities and operating practices of the service provider. For Internet-related services, the contract should require periodic control reviews performed by an independent party with sufficient expertise. These reviews may include penetration testing, intrusion detection, reviews of firewall configuration, and other independent control reviews.

Managing Ongoing Monitoring

The due diligence is done, the contract is signed, and the service is being provided—but it’s not yet time to relax. Remember that you can outsource the work but not the responsibility. Ongoing monitoring should include the effectiveness of the service providers’ security controls, financial strength, ability to respond to changes in the regulatory environment, and the impact of external events. Business process owners should establish and maintain a professional relationship with key service provider personnel.

In Practice

Service Provider Management Policy

Synopsis: To establish the information security–related criteria for service provider relationships.

Policy Statement:

  • Service provider is defined as a vendor, contractor, business partner, or affiliate who stores, processes, transmits, or accesses company information or company information systems.

  • The Office of Risk Management is responsible for overseeing the selection, contract negotiations, and management of service providers.

  • The Office of Risk Management will be responsible for conducting applicable service provider risk assessments.

  • Due diligence research must be conducted on all service providers. Depending on risk assessment results, due diligence research may include but is not limited to the following:

    • Financial soundness review

    • Internal information security policy and control environment review

    • Review of any industry standard audit and/or testing of information security–related controls

  • Service provider systems are required to meet or exceed internal security requirements. Safeguards must be commensurate with the classification of data and the level of inherent risk.

  • Contracts and/or agreements with service providers will specifically require them to protect the CIA of all company, customer, and proprietary information that is under their control.

  • Contracts and/or agreements must include notification requirements for suspected or actual compromise or system breach.

  • Contracts and/or agreements must include the service provider’s obligation to comply with all applicable state and federal regulations.

  • As applicable, contracts and/or agreements must include a clause related to the proper destruction of records containing customer or proprietary information when no longer in use or if the relationship is terminated.

  • Contracts and/or agreements must include provisions that allow for periodic security reviews/audits of the service provider environment.

  • Contracts and/or agreements must include a provision requiring service providers to disclose the use of contractors.

  • To the extent possible and practical, contractual performance will be monitored and/or verified. Oversight is the responsibility of the business process owner.

FYI: Small Business Note

The majority of small businesses do not have dedicated IT or information security staff. They rely on outside organizations or contractors to perform a wide range of tasks, including procurement, network management and administration, web design, and off-site hosting. Rarely are the “IT guys” properly vetted. A common small business owner remark is, “I wouldn’t even know what to ask. I don’t know anything about technology.” Rather than being intimidated, small business owners and managers need to recognize that they have a responsibility to evaluate the credentials of everyone who has access to their information systems. Peer and industry groups such as the Chamber of Commerce, Rotary, ISC2, and ISACA chapters can all be a source for references and recommendations. As with any service provider, responsibilities and expectations should be codified in a contract.

Threat Intelligence and Information Sharing

Organizations commonly use threat intelligence to better understand how threat actors carry out their attacks and to gain insight into the current threat landscape. Although threat intelligence in cybersecurity is a relatively new practice, the use of intelligence to learn how an enemy operates is a very old concept. Applying intelligence to cybersecurity makes complete sense, mainly because today's threat landscape is so broad and the adversaries vary so widely, from state-sponsored actors to cybercriminals extorting money from their victims.

Threat intelligence can be used to understand which attack profile is most likely to target your organization. For example, a hacktivist group may target your organization if it supports certain social or political positions. Threat intelligence also helps you understand which of your assets a threat actor is most likely to pursue, and it can help you scope your defenses based on the adversary. Conversely, a full understanding of the types of assets you are trying to protect helps identify the threat actors you should be worried about. The information obtained from threat intelligence can be categorized as

  • Technical

  • Tactical

  • Operational

  • Strategic

FYI: Open Source Intelligence (OSINT)

Various commercial threat intelligence companies provide threat intelligence feeds to their customers. However, there are also free open source feeds and publicly available sources. I have published a GitHub repository that includes several open source intelligence (OSINT) resources at https://github.com/The-Art-of-Hacking/art-of-hacking/tree/master/osint. The same GitHub repository includes numerous cybersecurity and ethical hacking references.

The Cisco Computer Security Incident Response Team (CSIRT) created an open source tool called GOSINT that can be used for collecting, processing, and exporting high-quality indicators of compromise (IOCs). GOSINT allows a security analyst to collect and standardize structured and unstructured threat intelligence. The tool and additional documentation can be found at https://github.com/ciscocsirt/gosint.

How Good Is Cyber Threat Intelligence if It Cannot Be Shared?

No organization can have enough information to create and maintain accurate situational awareness of the cyber threat landscape. The sharing of relevant cyber threat information among trusted partners and communities is a must to effectively defend your organization. Through cyber threat intelligence information sharing, organizations and industry peers can achieve a more complete understanding of the threats they face and how to defeat them.

Trust is the major barrier among organizations to effectively and openly share threat intelligence with one another. This is why the Information Sharing and Analysis Centers (ISACs) were created. According to the National Council of ISACs, each ISAC will “collect, analyze and disseminate actionable threat information to their members and provide members with tools to mitigate risks and enhance resiliency.”

ISACs were created after Presidential Decision Directive 63 (PDD-63), signed May 22, 1998, in which the United States federal government asked each critical infrastructure sector to establish a sector-specific organization to share information about threats and vulnerabilities. Most ISACs have threat warning and incident reporting capabilities, and ISACs exist today for most critical infrastructure sectors, including financial services, healthcare, and state and local government.

FYI: Technical Standards for Cyber Threat Intelligence Sharing

There are technical standards that define a set of information representations and protocols to model, analyze, and share cyber threat intelligence. Standardized representations have been created to exchange information about cyber campaigns, threat actors, incidents, tactics, techniques, and procedures (TTPs), indicators, exploit targets, observables, and courses of action. Two of the most popular standards for cyber threat intelligence information exchange are the Structured Threat Information Expression (STIX) and the Trusted Automated Exchange of Indicator Information (TAXII). Additional information about STIX and TAXII can be obtained at https://oasis-open.github.io/cti-documentation.
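To make the exchange format concrete, the sketch below assembles a minimal STIX 2.1-style indicator as a plain Python dictionary. The identifier is a placeholder UUID and the pattern reuses the IP address from this chapter's case study; neither is real threat data, and the OASIS specification should be consulted for the normative list of required properties.

```python
import json
from datetime import datetime, timezone

# A minimal STIX 2.1-style indicator built as a plain dictionary.
# The id is a placeholder UUID; the pattern value is illustrative only.
now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--00000000-0000-4000-8000-000000000000",
    "created": now,
    "modified": now,
    "name": "Suspected C2 IP address",
    "pattern": "[ipv4-addr:value = '93.177.168.141']",
    "pattern_type": "stix",
    "valid_from": now,
}

# Serialized as JSON, this is the form in which a TAXII server would deliver it.
print(json.dumps(indicator, indent=2))
```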

Another related standard is OASIS Open Command and Control (OpenC2), which provides specifications, lexicons, and other artifacts to describe cybersecurity command and control (C2) in a standardized manner. Additional information about OpenC2 can be obtained at https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=openc2.

Summary

This security domain is all about day-to-day operational activities. We started the chapter by looking at SOPs. We discussed that well-written SOPs provide direction, improve communication, reduce training time, and improve work consistency. Routine procedures that are short and require few decisions can be written using the simple step format. Long procedures consisting of more than 10 steps, with few decisions, should be written in hierarchical steps format or in a graphic format. Procedures that require many decisions should be written in the form of a flowchart.

Organizations are dynamic, and change is inevitable. The objective of change control is to ensure that only authorized changes are made to software, hardware, network access privileges, or business processes. A change management process establishes an orderly and effective mechanism for submission, evaluation, approval, prioritization, scheduling, communication, implementation, monitoring, and organizational acceptance of change.

Two mandatory components of a change management process are an RFC (Request for Change) document and a change control plan. Scheduled changes can be exempt from the process as long as they have a preapproved procedure. A good example of this is patch management. A patch is software or code designed to fix a problem. Applying security patches is the primary method of fixing security vulnerabilities in software. Patch management is the process of scheduling, testing, approving, and applying security patches.

Criminals design malware, short for malicious software (or script or code), to exploit devices, operating systems, applications, and user vulnerabilities with the intent of disrupting computer operations, gathering sensitive information, or gaining unauthorized access. A zero-day exploit is one that takes advantage of a security vulnerability on the same day that the vulnerability becomes publicly or generally known. Malware categorization is based on infection and propagation characteristics. A virus is malicious code that attaches to and becomes part of another program. A worm is a piece of malicious code that can spread from one computer to another without requiring a host file to infect. A Trojan is malicious code that masquerades as a legitimate benign application. Bots (also known as robots) are snippets of code designed to automate tasks and respond to instructions. An entire network of compromised devices is known as a botnet. Ransomware is a type of malware that takes a computer or its data hostage in an effort to extort money from victims. A rootkit is a set of software tools with privileged access permissions that hides its presence in the lower layers of a system, such as the operating system kernel or the device BIOS. Spyware is a general term used to describe software that, without a user's consent and/or knowledge, tracks Internet activity, such as searches and web surfing, collects data on personal habits, and displays advertisements. Hybrid malware is code that combines characteristics of multiple categories. A blended threat is when multiple variants of malware (worms, viruses, bots, and so on) are used in concert to exploit system vulnerabilities. An antimalware defense-in-depth arsenal includes both prevention and detection controls. The most familiar of these is antivirus (AV) software, which is designed to detect, contain, and in some cases eliminate malicious software.

Malware, user error, and system failure are among the many threats that can render data unusable. Having multiple copies of data is essential for both data integrity and availability. Data replication is the process of copying data to a second location that is available for immediate or near-time use. Data backup is the process of copying and storing data that can be restored to its original location. In both cases, it is essential to have SOPs for both replication/backup and restoration/recovery. Restoration and recovery processes should be tested to ensure that they work as anticipated.
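One way to partially automate restore testing is to compare cryptographic hashes of source files against their restored copies. A minimal sketch, assuming both directory trees are locally accessible:

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Hash a file in chunks so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir, restored_dir):
    """Return relative paths whose restored copy is missing or differs."""
    mismatches = []
    source = Path(source_dir)
    for src in source.rglob("*"):
        if src.is_file():
            rel = src.relative_to(source)
            dst = Path(restored_dir) / rel
            if not dst.is_file() or sha256_of(src) != sha256_of(dst):
                mismatches.append(str(rel))
    return mismatches
```

An empty result means every source file was restored bit-for-bit; anything else is a restore failure worth investigating before it matters.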

Email is a primary malware distribution channel. Criminals embed malware in attachments or include a hyperlink to a malware distribution site. Email systems need to be configured to scan for malware and to filter attachments. Users need to be trained not to click email links and not to open unexpected attachments. Organizations should also restrict access to personal web mail applications because they bypass internal email controls. Criminals take advantage of the inherent weaknesses in the email communication system.

Many organizations now use cloud services, including collaboration and unified communications solutions. Performing threat modeling of cloud services and understanding their risk is crucial for any organization. Encryption protects the privacy of the message by converting it from (readable) plain text into (scrambled) cipher text. The default posture for many email servers is to process and relay any mail sent to the server; this feature is known as open mail relay. Criminals exploit open mail relays to distribute malware, spam, and illegal material such as pornography. A blacklist is a list of email addresses, domain names, or IP addresses known to be compromised or intentionally used as a distribution platform; blacklisting is the practice of using that list as an email filter. Because email servers are Internet-facing and are open to receiving packets, they are easy targets for distributed denial of service (DDoS) attacks. The objective of a DDoS attack is to render the service inoperable.
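Blacklist filtering is commonly implemented as a DNS-based blackhole list (DNSBL) query: the sender's IP address octets are reversed, the blacklist zone is appended, and a successful DNS lookup means the address is listed. A minimal sketch, using `zen.spamhaus.org` purely as a well-known example zone:

```python
import socket

def dnsbl_name(ip, zone="zen.spamhaus.org"):
    """Build the DNSBL query name: reverse the IPv4 octets, append the zone.
    zen.spamhaus.org is used here only as a well-known example list."""
    reversed_octets = ".".join(reversed(ip.split(".")))
    return f"{reversed_octets}.{zone}"

def is_blacklisted(ip, zone="zen.spamhaus.org"):
    """Return True if the DNSBL answers for this IP (i.e., it is listed)."""
    try:
        socket.gethostbyname(dnsbl_name(ip, zone))
        return True          # any A record means the IP is on the list
    except socket.gaierror:
        return False         # NXDOMAIN: not listed (or the lookup failed)

# Example: the query-name construction alone, which needs no network access
print(dnsbl_name("203.0.113.7"))  # 7.113.0.203.zen.spamhaus.org
```

Production mail filters typically query several lists and weigh the results rather than rejecting on a single hit, since any one list can produce false positives.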

Almost every device and application on a network can record activity. This record of events is known as a log. Logs can be processed using either the standard syslog protocol or security information and event management (SIEM) applications. Syslog provides an open framework based on message type and severity; SIEM products are commercial applications that often use proprietary processes. Analysis techniques include correlation, sequencing, signature comparison, and trend analysis. Correlation ties individual log entries together based on related information. Sequencing examines activity based on patterns. Signature comparison matches log data against "known bad" activity. Trend analysis identifies activity over time that in isolation might appear normal. Log management is the process of configuring log sources (including log generation, storage, and security), performing analysis of log data, initiating appropriate responses to identified events, and managing the long-term storage of log data.

Operational security extends to service providers. Service providers are vendors, contractors, business partners, and affiliates who store, process, transmit, or access company information or company information systems. Service provider internal controls should meet or exceed those of the contracting organization. The conventional wisdom (and in some cases, the regulatory requirement) is that you can outsource the work but not the liability. Due diligence describes the process or methodology used to assess the adequacy of a service provider. SSAE18 audit reports have become the most widely accepted due diligence documentation. SSAE18 reports are independent audits certified by CPA firms.

Service provider contracts should include a number of information security–related clauses, including performance standards, security and privacy compliance requirements, incident notification, business continuity and disaster recovery commitments, and auditing and ongoing monitoring options.

Threat intelligence can be used to understand which attack profile is most likely to target your organization. You may also be able to take advantage of threat intelligence to scope data based on the adversary. If you have a full understanding of the types of assets that you are trying to protect, it can also help identify the threat actors that you should be worried about. The sharing of relevant cyber threat information among trusted partners and communities is a must to effectively defend your organization. Through cyber threat intelligence information sharing, organizations and industry peers can achieve a more complete understanding of the threats they face and how to defeat them.

Test Your Skills

Multiple Choice Questions

1. Which of the following is true about documenting SOPs?

A. It promotes business continuity.

B. The documentation should be approved before publication and distribution.

C. Both A and B.

D. Neither A nor B.

2. Which of the following is an alternative name for SOPs?

A. System operating protocols

B. Standard operating protocols

C. Standard objective protocols

D. Standard objective procedures

3. After a procedure has been documented, it should be________.

A. reviewed, verified, and authorized before being published

B. triaged, tested, and authenticated before being published

C. reviewed, authorized, and archived before being published

D. reviewed, verified, and deleted before being published

4. The change control process starts with which of the following?

A. Budget

B. RFC submission

C. Vendor solicitation

D. Supervisor authorization

5. What is the most important message to share with the workforce about “change”?

A. The reason for the change

B. The cost of the change

C. Who approved the change

D. Management’s opinion of the change

6. When protecting SOP documentation, ____________ should be put in place to protect the integrity of the document from both unintentional error and malicious insiders.

A. access and version controls

B. access and authorization

C. triage functions and enforcement controls

D. access, log accounting, and parsing

7. _____________ is an internal procedure by which authorized changes are made to software, hardware, network access privileges, or business processes.

A. Engineering management

B. Engineering control

C. Change management

D. Change control

8. Which of the following statements best describes a security patch?

A. A security patch is designed to fix a security vulnerability.

B. A security patch is designed to add security features.

C. A security patch is designed to add security warnings.

D. A security patch is designed to fix code functionality.

9. Which of the following is a component of an AV application?

A. Definition files

B. Handler

C. Patch

D. Virus

10. Which of the following statements best describes the testing of security patches?

A. Security patches should never be tested because waiting to deploy is dangerous.

B. Security patches should be tested prior to deployment, if possible.

C. Security patches should be tested one month after deployment.

D. Security patches should never be tested because they are tested by the vendor.

11. Which of the following operating systems are vulnerable to malware?

A. Apple OS only.

B. Android OS only.

C. Microsoft Windows OS only.

D. Malware is operating system–agnostic.

12. Which of the following terms best describes malware that is specifically designed to hide in the background and gather info over an extended period of time?

A. Trojan

B. APT

C. Ransomware

D. Zero-day exploit

13. A _________________ can spread from one computer to another without requiring a host file to infect.

A. virus

B. Trojan

C. worm

D. rootkit

14. _________________ wait for remote instructions and are often used in DDoS attacks.

A. APTs

B. Bots

C. DATs

D. Command and Control servers

15. Which of the following is a type of malware that takes a computer or its data hostage in an effort to extort money from victims?

A. Virus

B. Trojan

C. APT

D. Ransomware

16. Which of the following OSI Layers provides services for the transmission of bits over the data link?

A. Layer 1: Physical Layer

B. Layer 2: Data Link Layer

C. Layer 3: Network Layer

D. Layer 7: Application Layer

17. Which of the following OSI Layers includes services for end-to-end connection establishment and information delivery? For example, it includes error detection, retransmission capabilities, and multiplexing.

A. Layer 4: Transport Layer

B. Layer 2: Data Link Layer

C. Layer 3: Network Layer

D. Layer 7: Application Layer

18. Which of the following is the targeted time frame in which a business process must be restored after a disruption or a disaster?

A. Recovery time objective (RTO)

B. Recovery point objective (RPO)

C. Recovery trusted objective (RTO)

D. Recovery disruption objective (RDO)

19. Which of the following terms best describes the Department of Defense project to develop a set of communications protocols to transparently connect computing resources in various geographical locations?

A. DoDNet

B. ARPANET

C. EDUNET

D. USANET

20. Which of the following terms best describes the message transport protocol used for sending email messages?

A. SMTP

B. SMNP

C. POP3

D. MIME

21. In its native form, email is transmitted in _________.

A. cipher text

B. clear text

C. hypertext

D. meta text

22. Which of the following statements best describes how users should be trained to manage their email?

A. Users should click embedded email hyperlinks.

B. Users should open unexpected email attachments.

C. Users should access personal email from the office.

D. Users should delete unsolicited or unrecognized emails.

23. The default posture for many email servers is to process and relay any mail sent to the server. The ability to relay mail through a server can (and often is) taken advantage of by those who benefit from the illegal use of the resource. Which of the following are attractive to criminals to send unsolicited emails (spam)?

A. Open mail proxies

B. Open mail relays

C. Closed mail relays

D. Blacklist relay servers

24. NetFlow is used as a network security tool because __________.

A. its reporting capabilities provide nonrepudiation, anomaly detection, and investigative capabilities

B. it is better than IPFIX

C. it is better than SNMP

D. it is better than IPSEC

25. Which of the following statements best describes trend analysis?

A. Trend analysis is used to tie individual log entries together based on related information.

B. Trend analysis is used to examine activity over time that in isolation might appear normal.

C. Trend analysis is used to compare log data to known bad activity.

D. Trend analysis is used to identify malware only.

26. It is very common that organizations use threat intelligence to ______________.

A. maintain competitive advantage

B. configure their antivirus to be less invasive

C. hire new employees for their cybersecurity teams

D. better know how threat actors carry out their attacks and to gain insights about the current threat landscape

27. Which of the following is a standard used for cyber threat intelligence?

A. STIX

B. CSAF

C. XIT

D. TIX

28. SSAE18 audits must be attested to by a _____________.

A. Certified Information System Auditor (CISA)

B. Certified Public Accountant (CPA)

C. Certified Information Systems Manager (CISM)

D. Certified Information System Security Professional (CISSP)

29. Why were Information Sharing and Analysis Centers (ISACs) created?

A. To disclose vulnerabilities to the public

B. To disclose vulnerabilities to security researchers

C. To combat ransomware and perform reverse engineering

D. To effectively and openly share threat intelligence with one another

30. Which of the following reasons best describes why independent security testing is recommended?

A. Independent security testing is recommended because of the objectivity of the tester.

B. Independent security testing is recommended because of the expertise of the tester.

C. Independent security testing is recommended because of the experience of the tester.

D. All of the above.

Exercises

Exercise 8.1: Documenting Operating Procedures
  1. SOPs are not restricted to use in IT and information security. Cite three non-IT or security examples where SOP documentation is important.

  2. Choose a procedure that you are familiar enough with that you can write SOP documentation.

  3. Decide which format you are going to use to create the SOP document.

Exercise 8.2: Researching Email Security
  1. Does the personal email application you are currently using have an option for “secure messaging”? If so, describe the option. If not, how does this limit what you can send via email?

  2. Does the email application you are using have an option for “secure authentication” (this may be referred to as secure login or multifactor authentication)? If so, describe the option. If not, does this concern you?

  3. Does the email application scan for malware or block attachments? If so, describe the option. If not, what can you do to minimize the risk of malware infection?

Exercise 8.3: Researching Metadata
  1. Most applications include metadata in the document properties. What metadata does the word processing software you currently use track?

  2. Is there a way to remove the metadata from the document?

  3. Why would you want to remove metadata before distributing a document?

Exercise 8.4: Understanding Patch Management
  1. Do you install operating system or application security patches on your personal devices, such as laptops, tablets, and smartphones? If yes, how often? If not, why not?

  2. What method do you use (for example, Windows Update)? Is the update automatic? What is the update schedule? If you do not install security patches, research and describe your options.

  3. Why is it sometimes necessary to reboot your device after applying security patches?

Exercise 8.5: Understanding Malware Corporate Account Takeovers
  1. Hundreds of small businesses across the country have been victims of corporate account takeovers. To learn more, visit the Krebs on Security blog, https://krebsonsecurity.com/category/smallbizvictims.

  2. Should financial institutions be required to warn small business customers of the dangers associated with cash management services such as ACH and wire transfers? Explain your reasoning.

  3. What would be the most effective method of teaching bank customers about corporate account takeover attacks?

Projects

Project 8.1: Performing Due Diligence with Data Replication and Backup Service Providers
  1. Do you store your schoolwork on your laptop? If not, where is the data stored? Write a memo explaining the consequences of losing your laptop, or if the alternate location or device becomes unavailable. Include the reasons why having a second copy will contribute to your success as a student. After you have finished step 2 of this project, complete the memo with your recommendations.

  2. Research “cloud-based” backup or replication options. Choose a service provider and answer the following questions:

    What service/service provider did you choose?

    How do you know they are reputable?

    What controls do they have in place to protect your data?

    Do they reveal where the data will be stored?

    Do they have an SSAE18 or equivalent audit report available for review?

    Do they have any certifications, such as McAfee Secure?

    How much will it cost?

    How often are you going to update the secondary copy?

    What do you need to do to test the restore/recovery process?

    How often will you test the restore/recovery process?

Project 8.2: Developing an Email and Malware Training Program

You are working as an information security intern for Best Regional Bank, which has asked you to develop a PowerPoint training module that explains the risks (including malware) associated with email. The target audience is all employees.

  1. Create an outline of the training to present to the training manager.

  2. The training manager likes your outline. She just learned that the company would be monitoring email to make sure that data classified as “protected” is not being sent insecurely and that access to personal web-based email is going to be restricted. You need to add these topics to your outline.

  3. Working from your outline, develop a PowerPoint training module. Be sure to include email “best practices.” Be prepared to present the training to your peers.

Project 8.3: Developing Change Control and SOPs

The Dean of Academics at ABC University has asked your class to design a change control process specifically for mid-semester faculty requests to modify the day, the time, or the location where their class meets. You need to do the following:

  1. Create an RFC form.

  2. Develop an authorization workflow that specifies who (for example, the department chair) needs to approve the change and in what order.

  3. Develop an SOP flowchart for faculty members to use that includes submitting the RFC, authorization workflow, and communication (for example, students, housekeeping, campus security, registrar).

Case Study

Using Log Data to Identify Indicators of Compromise

Log data offer clues about activities that have unexpected—and possibly harmful—consequences. The following parsed and normalized firewall log entries indicate a possible malware infection and data exfiltration. The entries show a workstation making connections to Internet address 93.177.168.141 and receiving and sending data over TCP port 16115.

id=firewall sn=xxxxxxxxxxxx time="2018-04-02 11:53:12 UTC" fw=255.255.255.1 pri=6 c=262144 m=98 msg="Connection Opened" n=404916 src=10.1.1.1 (workstation):49427:X0 dst=93.177.168.141:16115:X1 proto=tcp/16115

id=firewall sn=xxxxxxxxxxxx time="2018-04-02 11:53:29 UTC" fw=255.255.255.1 pri=6 c=1024 m=537 msg="Connection Closed" n=539640 src=10.1.1.1 (workstation):49427:X0 dst=93.177.168.141:16115:X1 proto=tcp/16115 sent=735 rcvd=442

id=firewall sn=xxxxxxxxxxxx time="2018-04-02 11:53:42 UTC" fw=255.255.255.1 pri=6 c=262144 m=98 msg="Connection Opened" n=404949 src=10.1.1.1 (workstation):49430:X0 dst=93.177.168.141:16115:X1 proto=tcp/16115

id=firewall sn=xxxxxxxxxxxx time="2018-04-02 11:54:30 UTC" fw=255.255.255.1 pri=6 c=1024 m=537 msg="Connection Closed" n=539720 src=10.1.1.1 (workstation):49430:X0 dst=93.177.168.141:16115:X1 proto=tcp/16115 sent=9925 rcvd=639
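Because the entries are key=value pairs, they are easy to analyze programmatically. The sketch below parses two of the "Connection Closed" records and totals the bytes sent to the suspicious destination; the entries are assumed to be collapsed into single lines (and abbreviated here), and the parsing approach is illustrative rather than tied to any particular firewall vendor's tooling.

```python
import re

# Abbreviated copies of the two "Connection Closed" records above.
LOG_ENTRIES = [
    'id=firewall time="2018-04-02 11:53:29 UTC" m=537 msg="Connection Closed" '
    'src=10.1.1.1 dst=93.177.168.141 proto=tcp/16115 sent=735 rcvd=442',
    'id=firewall time="2018-04-02 11:54:30 UTC" m=537 msg="Connection Closed" '
    'src=10.1.1.1 dst=93.177.168.141 proto=tcp/16115 sent=9925 rcvd=639',
]

def parse(entry: str) -> dict:
    """Split an entry into key=value pairs, honoring quoted values."""
    return dict(re.findall(r'(\w+)=("[^"]*"|\S+)', entry))

# Total bytes the workstation sent to the suspicious address.
total_sent = sum(
    int(parse(e)["sent"])
    for e in LOG_ENTRIES
    if parse(e).get("dst") == "93.177.168.141"
)
print(total_sent)  # 10660
```

Aggregating per-destination byte counts like this is one way to surface the asymmetry (far more data sent than received) that can indicate exfiltration.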

  1. Describe what is happening.

  2. Is the log information useful? Why or why not?

  3. Research the destination IP address (dst) and the protocol/port (proto) used for communication.

  4. Can you find any information that substantiates a malware infection and data exfiltration?

  5. What would you recommend as next steps?

References

Regulations Cited

“16 CFR Part 314: Standards for Safeguarding Customer Information; Final Rule, Federal Register,” accessed 04/2018, https://ithandbook.ffiec.gov/it-workprograms.aspx.

“Federal Information Security Management Act (FISMA),” accessed 04/2018, https://csrc.nist.gov/topics/laws-and-regulations/laws/fisma.

“Gramm-Leach-Bliley Act,” the official website of the Federal Trade Commission, Bureau of Consumer Protection Business Center, accessed 04/2018, https://www.ftc.gov/tips-advice/business-center/privacy-and-security/gramm-leach-bliley-act.

“DHS Statement on the Issuance of Binding Operational Directive 17-01,” accessed 04/2018, https://www.dhs.gov/news/2017/09/13/dhs-statement-issuance-binding-operational-directive-17-01.

“HIPAA Security Rule,” the official website of the Department of Health and Human Services, accessed 04/2018, https://www.hhs.gov/hipaa/for-professionals/security/index.html.

Other References

NIST Cybersecurity Framework, draft version 1.1, accessed 04/2018, https://www.nist.gov/sites/default/files/documents/draft-cybersecurity-framework-v1.11.pdf.

NIST article “A Framework for Protecting Our Critical Infrastructure,” accessed 04/2018, https://www.nist.gov/blogs/taking-measure/framework-protecting-our-critical-infrastructure.

FFIEC IT Examination Handbook, accessed 04/2018, https://ithandbook.ffiec.gov/.

“ISO 5807:1985,” ISO, accessed 04/2018, https://www.iso.org/standard/11955.html.

NIST Special Publication 800-115: Technical Guide to Information Security Testing and Assessment, accessed 04/2018, https://csrc.nist.gov/publications/detail/sp/800-115/final.

“Project Documentation Guidelines, Virginia Tech,” accessed 04/2018, www.itplanning.org.vt.edu/pm/documentation.html.

“The State of Risk Oversight: An Overview of Enterprise Risk Management Practices,” American Institute of CPAs and NC State University, accessed 04/2018, https://www.aicpa.org/content/dam/aicpa/interestareas/businessindustryandgovernment/resources/erm/downloadabledocuments/aicpa-erm-research-study-2017.pdf.

Cisco Talos Ransomware Blog Posts, accessed 04/2018, http://blog.talosintelligence.com/search/label/ransomware.

Skoudis, Ed. Malware: Fighting Malicious Code, Prentice Hall, 2003.

Still, Michael, and Eric Charles McCreath. “DDoS Protections for SMTP Servers.” International Journal of Computer Science and Security, Volume 4, Issue 6, 2011.

“What Is the Difference: Viruses, Worms, Trojans, and Bots,” accessed 04/2018, https://www.cisco.com/c/en/us/about/security-center/virus-differences.html.

Wieringa, Douglas, Christopher Moore, and Valerie Barnes. Procedure Writing: Principles and Practices, Second Edition, Columbus, Ohio: Battelle Press, 1988.

“Mirai IoT Botnet Co-Authors Plead Guilty,” Krebs on Security, accessed 04/2018, https://krebsonsecurity.com/2017/12/mirai-iot-botnet-co-authors-plead-guilty/.

National Council of ISACs, accessed 04/2018, https://www.nationalisacs.org.

OASIS Cyber Threat Intelligence (CTI) Technical Committee, accessed 04/2018, https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=cti.
