Chapter 9
Domain 7: Systems and Application Security

As an SSCP, you may be called upon to be a member of an incident response team and take an active part in security investigations involving intrusions of malware and malicious code, as well as the exfiltration of information from corporate and enterprise networks. It is important for the SSCP candidate to understand the vocabulary used to describe the creation, distribution, and countermeasures related to malicious code.

As a security professional, your job description will undoubtedly include endpoint as well as network security, both working with network equipment and providing assistance in security investigations. An endpoint is any device at the end of a network connection that does not pass information on or through to another device. Endpoints include user workstations, network nodes such as printers, scanners, and point-of-sale devices, and portable devices such as tablets, cell phones, and other devices that contain data or are used to communicate between the end user and the network. Each of these devices requires its own set of security controls, which the security professional may be required to install, calibrate, update, and monitor.

Big data and data warehouses, as well as virtual environments, each present challenges to the security professional. As an SSCP, you should be familiar with not only the terminology but also the type of attacks and mitigation techniques required in each environment.

Understand Malicious Code and Apply Countermeasures

Malicious code is software designed and intended to do harm. You know from the discussion of risk analysis in Chapter 5 that risk arises when a threat exploits a vulnerability. NIST Special Publication 800-30 Revision 1, as illustrated in Figure 9.1, identifies both a threat source and a threat action. Malicious code may be the result of any one of several threat sources:


Figure 9.1 Threat source and threat action as illustrated in NIST SP 800-30 revision 1

Reprinted courtesy of the National Institute of Standards and Technology, U.S. Department of Commerce. Not copyrightable in the United States.

  1. Hacker Hacker is a broad term that can refer to anyone from individuals who only want to disrupt normal operations to terrorists waging a type of cyberwar against a target, and anything in between. A hacker may be a highly skilled individual with a personal agenda to exfiltrate data such as credit card information for their own personal gain.
  2. Commercial Hacker (Comhacker) This is a type of hacker who may be hired by a third party to infiltrate a target for a specific agenda. The purpose of a commercial hacker may include stealing intellectual property, exfiltrating documents, changing information, and engaging in targeted disruption activities.
  3. Certified Ethical Hacker (CEH) A certified ethical hacker, also referred to as a white hat hacker, is paid to infiltrate systems, break applications, and create reports concerning their activities. These individuals conduct penetration tests under strict guidelines and contractual relationships for the benefit of their employer.
  4. Script Kiddie Usually an unskilled, inexperienced, immature hacker who utilizes hacking tools and scripts (thus, the term script kiddie) to generally attack sites or break into computer systems for fun and amusement.
  5. Cracker The controversial name claimed by both black hat and white hat computer system intruders. White hat intruders, generally known as certified ethical hackers, claim that only a black hat hacker should be referred to as a cracker.
  6. Nation State Countries around the world that sponsor cyber terrorism are known as nation states. A nation state may promote a well-organized and well-funded hacking and infiltration organization that may plant advanced persistent threats (APTs) in foreign government agencies or foreign commercial enterprises for intelligence-gathering purposes. Cyberwarfare is the term used to describe the activities carried out by nation states.
  7. Hacktivist A hacktivist is a person or group that exploits a weakness in technology in order to draw attention to a personal message or agenda.
  8. Insider Attack An insider attack may be performed by a disgruntled employee, third-party contractor, or anyone with direct inside access to an organization's network or host workstations.

Each threat source may create a threat action. The threat action is the actual attack or threat that exploits a vulnerability of the target. For instance, a threat action used by a commercial hacker may be to infiltrate the chief executive officer's personal computer and access any information or documents concerning the release of the new product.

Another example of threat sources creating threat actions may be that of a hurricane as a threat source. A hurricane may provide a number of threats such as high winds, flooding, lightning strikes, creation of debris that blocks roads, and disruption of utilities. Each of these threat actions may exploit a vulnerability. High winds may exploit the weak construction of the roof of the building.

Each threat action requires some sort of vehicle and means of delivering the payload to the target. The pathway into a target that is used by an attacker is referred to as a vector. For instance, a hacktivist creates Trojan horse malware that is loaded onto a host computer and installs a keylogger as well as a means of dialing home and reporting collected information. The attack vector was a USB drive carrying the Trojan horse, intentionally left on the ground near an individual's parking place and viewed as a freebie by the finder. The finder then plugged the USB drive into their computer, which immediately uploaded the Trojan horse malware. The hacktivist might have selected several other attack vectors, including network penetration attempts, a watering hole attack, a phishing attack, an insider attack, and a botnet exploitation.

Malicious Code Terms and Concepts

It is important to understand a number of concepts and technical terms relating to malware. The following list includes technical terms and concepts relating to attacks, hacker tools, and types of malware.

  1. Spoofing Attack A spoofing attack is where the attacker appears to be someone or something else in order to mislead another person or device. Through this impersonation, the attacker may successfully launch a malicious attack.
  2. Spyware Spyware is software that is placed on the host computer and monitors actions and activities and often creates log of some sort. Spyware might also have some technique of sending the log to an external retrieval site.
  3. Illegal Botnet A network of compromised computers is called a botnet. Each infected computer is referred to as a bot. Computers can be infected through the use of malware. Users may be tricked into clicking links in emails, which may immediately insert the malware bot payload. A bot herder or bot master is usually the originator of the botnet or at the very least the current controller and operator. The most prevalent use of a botnet is for forwarding spam mail. The spam mail sender contracts with the bot herder and pays for the use of thousands of forwarding computers to send out spam.
  4. Legal Botnet Not all botnets are illegal. The original botnets were groups of Internet-connected computers linked by Internet Relay Chat (IRC), communicating with other similar machines for legitimate purposes.
  5. Zombie A zombie is generally described as a compromised computer that may be controlled remotely. A zombie could be a stand-alone computer or part of a much larger group, numbering in the hundreds or thousands of computers. Zombies are sometimes referred to as bots.
  6. Cache Poisoning Throughout a computer network there are a number of cache locations. Domain Name System (DNS) name servers and Address Resolution Protocol (ARP) make use of cache memory locations for short-term storage of information. Anytime erroneous information is placed into the cache or if the cache is corrupted, the term used is cache poisoning.
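As an illustration, the following toy Python model shows why a poisoned cache is dangerous: once a bogus entry is stored, every later lookup trusts it. The hostnames and addresses are invented for the example, and real resolvers include defenses (transaction IDs, source-port randomization, DNSSEC) that are not modeled here.

```python
# Toy model of DNS cache poisoning (illustrative only).
dns_cache = {}

def cache_lookup(hostname, authoritative):
    """Return a cached address, populating the cache on a miss."""
    if hostname not in dns_cache:
        dns_cache[hostname] = authoritative[hostname]
    return dns_cache[hostname]

authoritative_records = {"bank.example.com": "203.0.113.10"}

# A legitimate lookup populates the cache.
cache_lookup("bank.example.com", authoritative_records)

# An attacker who can write to the cache "poisons" the stored answer;
# every later lookup silently returns the attacker's address.
dns_cache["bank.example.com"] = "198.51.100.66"  # attacker-controlled host
print(cache_lookup("bank.example.com", authoritative_records))
```

Because the cache is consulted before the authoritative source, the poisoned answer persists until the entry expires or is flushed.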
  7. Computer Worm A computer worm is a type of software that replicates itself without assistance. It may be installed by simply clicking a link in an email. A worm infects host computers as well as networks by leaving a copy of itself in each location or host machine. A primary use for a worm is to create a denial-of-service (DoS) attack.
  8. Keylogger A keylogger, also called a keystroke logger, is a malware program that records keystrokes. The program usually incorporates a method of transmitting the keystroke file to a remote location. The term keylogger also refers to a mechanical device that is inserted between the keyboard and the computer and performs the same keystroke interception activities.
  9. Malware Malware, sometimes referred to as malcode, is software specifically intended to cause harm. Malware is short for malicious software, while malcode refers to malicious code.
  10. Privilege Escalation Privilege escalation means a user or attacker acquires privileges they are not entitled to. Privileges are capabilities assigned to a system user. The higher the privileges, the more capabilities and activities a user can perform within a system. For instance, system administrators have much higher privileges than end users. The goal of any attacker is privilege escalation. They may successfully intrude into a network and then progressively increase their privileges. The highest privilege level is generally called the “root privilege level.” This originated in the days of the Unix operating system, when the root level was the highest level accessible within the operating system.
  11. Man in the Middle In a man-in-the-middle attack, a malicious actor is inserted into a conversation. At least one side of the conversation believes that they are talking to the appropriate or original party. In some cases, both sides of the conversation are being intercepted by the man in the middle; each side believes it is talking to the other. A man-in-the-middle attack can be as simple as a rogue access point at a coffee shop or airport intercepting wireless communications.
  12. Proof of Concept A test case, or prototype, is used to prove the veracity of an idea. In the case of malware, proof of concept would be used to illustrate that a specific attack works. It may also be used in reverse engineering to test concepts.
  13. Rootkit A rootkit is a very old attack where malicious software allows the attacker to take root control of an operating system. This type of malware disguises itself by appearing as authentic operating system software to hide from antivirus/anti-malware software. The rootkit grants the attacker high-level authority with the ability to change system parameters and may remotely execute files.
  14. Advanced Persistent Threat An advanced persistent threat refers to continuous hacking processes often carried out by rogue governments or nation states against other nations, organizations, or large businesses. In practice, malicious code or malware is placed on various end-user workstations, personal devices, hosts, and servers to provide long-term surveillance and exfiltration of information. Typically, this malware is quiet and may be hidden for long periods of time, sending information out only infrequently. An advanced persistent threat is stealthy malware primarily used for the discovery and exfiltration of state or corporate confidential information and is rarely if ever used for disruptive purposes.
  1. Buffer Overflow A buffer overflow attack occurs when more data is placed into a memory location, referred to as a buffer, than the memory location can accept. In such cases, the data overflows onto adjacent memory storage, thus corrupting the other storage or causing a failure in the application.
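Because Python bounds-checks its own buffers, the overflow mechanics can only be simulated; the sketch below models an 8-byte buffer and its adjacent memory as a single bytearray to show how an unchecked copy corrupts neighboring data.

```python
# Toy simulation of a buffer overflow (real overflows corrupt native
# memory; here adjacent storage is modeled inside one bytearray).
memory = bytearray(16)          # bytes 0-7: the buffer; bytes 8-15: adjacent data
memory[8:16] = b"RETNADDR"      # pretend saved data sits next to the buffer

def unsafe_copy(data: bytes):
    """Copy without a length check -- the flaw behind buffer overflows."""
    memory[0:len(data)] = data  # no bounds check against the 8-byte buffer

unsafe_copy(b"A" * 12)          # 12 bytes written into an 8-byte buffer
print(memory[8:16])             # the first 4 adjacent bytes were overwritten
```

The excess four bytes spill past the buffer boundary, corrupting the adjacent "saved" data exactly as the definition above describes.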
  2. Pointer Overflow A pointer overflow attack is similar to a buffer overflow attack. The pointer is used to index the process within a process stack. The attacker attacks the pointer through buffer overflow techniques to change it to point at the malicious code.
  3. Cross-Site Request Forgery Cross-site request forgery (CSRF) is a malicious attack that tricks the user's web browser into issuing unauthorized commands to a website where the user is authenticated, so that the undesired actions appear to be performed by an authorized user.
  4. Cross-Site Scripting Cross-site scripting (XSS) is based on inserting a client-side script into a genuine website. This is possible due to poor application or website design, such as limited data validation in websites. The scripts are then executed in the browsers of other users who access the same website.
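A common XSS countermeasure is to escape user-supplied input before echoing it back into a page. A minimal Python sketch using only the standard library:

```python
import html

def render_comment(user_input: str) -> str:
    """Escape user-supplied text before embedding it in a page."""
    return "<p>" + html.escape(user_input) + "</p>"

malicious = "<script>alert('xss')</script>"
print(render_comment(malicious))
# The script tags are neutralized into harmless escaped text.
```

Escaped output renders as visible text rather than executing in the visitor's browser, which is why output encoding is a baseline XSS defense.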
  5. Directory Traversal Directory traversal is a type of web attack using HTTP in which the attacker climbs out of the original website directory to a parent directory, or higher-level directory. Traversal refers to crossing the boundary between the website directory and higher directories, referred to as root directories.
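Web applications defend against directory traversal by canonicalizing each requested path and verifying that it stays inside the site's root. A minimal Python sketch (the web root path is illustrative):

```python
import os

WEB_ROOT = os.path.realpath("/var/www/site")  # illustrative web root

def is_safe_path(requested: str) -> bool:
    """Reject any request that resolves outside the web root."""
    full = os.path.realpath(os.path.join(WEB_ROOT, requested))
    return os.path.commonpath([WEB_ROOT, full]) == WEB_ROOT

print(is_safe_path("images/logo.png"))   # a normal request inside the root
print(is_safe_path("../../etc/passwd"))  # a traversal attempt outside it
```

The `../../` sequence is collapsed by canonicalization, so the check catches the traversal even when the raw string begins inside the web root.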
  6. Back Door Originally the term back door, also called a maintenance hook, referred to an access device used by a program developer to access the application during the development stage. If it was not removed, at the very least it would allow the programmer access to the application around the normal access controls. Another use for back doors is for hackers to access applications and databases. Usually delivered by Trojan malware, a Trojan payload installs a malware application that creates a back door, or access port, for the attacker.
  7. Reverse Engineering The act of decomposing an item to determine its construction and method of operation is reverse engineering. For instance, malware is often reverse engineered to determine its construction, components, and the effect of the components upon the attacked system.
  8. Adware Adware is a type of spyware that, while making an advertising statement or showing a banner, solicits clicks from the end user. When the user clicks the adware banner, a Trojan or virus may be downloaded immediately, infecting the user's machine. This could be as simple as a keylogger, or it might convert the machine into a bot. Adware may also be used to create a revenue stream for the creator through a pay-per-click program. In this type of exploit, each time a user clicks a digital ad, the advertiser must pay several cents to several dollars. This revenue may also be paid as commissions if the ad is placed on a user's website. In an adware exploitation campaign, tens of thousands of users may be directed to high-paying advertising as bogus clicks. The hacker may then be paid thousands of dollars for their effort.
  9. Ransomware Ransomware is malware, often delivered through a Trojan attack, that disables a system and advises the user to pay a ransom to release the system. The ransom might be paid simply by purchasing a software application purported to be a virus scanner, forcing the individual to expose their credit card information to the attacker. Ransomware is much more serious when entire organizations and companies are attacked and held for hundreds of thousands of dollars' worth of ransom to release their systems.
  10. Covert Channel Any means of communication other than the standard channel of communication is referred to as using a covert channel, such as, for instance, sending messages on a control channel of a device.
  11. Out-of-Band Transmission Out-of-band is transmitting a message or data by any means other than the normal channel of communication. Out of band is normally used to describe a method of exchanging passwords by not sending them over the same channel as the encrypted message.
  12. Payload A payload is the harmful code contained within any malware.
  13. SQL Injection Structured Query Language (SQL) is a communication system used to access databases. With a SQL injection, an attacker inserts a SQL escape character, a combination of SQL characters, or part of the SQL script into a website form field. If the form field offers limited data validation, the insertion may return database information or an error code, which may be useful to the attacker.
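The standard countermeasure is the parameterized query, which treats user input as data rather than as part of the SQL statement. The following sketch uses an in-memory SQLite database with invented table contents to contrast the vulnerable and safe forms:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice-secret')")
conn.execute("INSERT INTO users VALUES ('bob', 'bob-secret')")

evil = "' OR '1'='1"  # classic injection string

# Vulnerable: string concatenation lets the input rewrite the query.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + evil + "'").fetchall()
print(vulnerable)  # every row returned -- the WHERE clause was subverted

# Safe: a parameterized query treats the input as a literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (evil,)).fetchall()
print(safe)        # no rows match the literal string
```

The same `' OR '1'='1` input that dumps the whole table in the concatenated query matches nothing when bound as a parameter.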
  14. Virus A virus is malicious code software that requires an action to reproduce. Viruses usually attach themselves to executable programs and thereby reproduce and spread every time the executable is launched. There are a number of terms associated with viruses:
    1. Virus Payload The harmful component of a virus. Some payloads include devastating properties that can erase entire hard drives or permanently harm hardware equipment.
    2. Virus Hoax Typically email warnings concerning potential virus attacks. The spread of the email warnings actually creates a denial-of-service attack among many users. Virus hoax notifications should always be referred to help desk or IT departments for verification prior to distribution.
    3. Macro Virus A virus created through the use of macro programs usually found in Microsoft Office applications. Microsoft has since taken steps to request approval prior to executing macros in Microsoft Office Suite applications.
    4. Virus Signature A specific identifiable string of characters that characterizes it as a virus or family of viruses. Anti-malware software as well as intrusion prevention and detection systems attempt to identify viruses based upon specific signatures.
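Signature matching itself is conceptually simple: search content for known byte patterns. The sketch below scans for the harmless EICAR test string, the industry-standard pattern used to verify that antivirus scanning is working:

```python
# Minimal sketch of signature-based scanning.
SIGNATURES = {
    "EICAR-Test-File": b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",
}

def scan_bytes(data: bytes):
    """Return the names of any known signatures found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

# The standard (and harmless) EICAR antivirus test string.
sample = b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
print(scan_bytes(sample))        # signature found
print(scan_bytes(b"clean file")) # nothing found
```

Real scanners add wildcard patterns, heuristics, and unpacking, but the core idea is this substring search against a signature library.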
    5. File Infecting Virus A type of virus that specifically infects executable files to make them unusable or permanently damaged. By overwriting code or permanently inserting code, the virus changes the original files to perform differently.
    6. Boot Sector Virus This type of virus infects the storage device's master boot record. This was a popular attack in the days of floppy disks. All modern operating systems contain boot sector safeguards and anti-boot sector virus controls.
    7. Polymorphic Virus A polymorphic virus changes slightly as it replicates throughout the system. This makes it difficult for scanners to detect this type of virus because of different variations. This type of virus most often attacks data types and data functions used in many programming languages. The virus will usually manage to hide from your antivirus software. Very often a polymorphic virus will encrypt parts of itself to avoid detection. When the virus does this, it's referred to as a mutation virus.
    8. Stealth Virus A stealth virus masks itself as another type of program to avoid detection, usually by changing the filename extension or modifying the filename. The stealth virus may also attempt to avoid detection by antivirus applications. When the system utility or program runs, the virus may redirect commands around itself in order to avoid detection. Stealth viruses also modify file information, reporting a file size different from the actual file size, in order to avoid detection.
    9. Retrovirus A retrovirus directly attacks the antivirus program, potentially destroying the virus definition database file. The virus disables the antivirus program yet makes it appear as if it is working, thus providing a false sense of security.
    10. Multipartite Virus A multipartite virus attacks different parts of the host system, such as a boot sector, executable files, and application files. This type of virus will insert itself into so many places that, even if one instance of the virus is removed, many still remain.
    11. Armored Virus An armored virus is constructed in such a manner as to be highly resistant to removal by anti-malware software.
  1. Logic Bomb A logic bomb is a script or malware usually installed by a disgruntled employee or insider to cause harm based on a certain event occurring. For instance, if the employee is fired and does not reset the script, the script executes, causing some harm to the network, host, or data.
  1. File Extension Attack The Windows New Technology File System (NTFS) allows filenames to extend up to 255 characters. These extremely long filenames are usually abbreviated on directory displays and in other presentations, thus hiding the fact that there may be a double file extension or other hidden filenames.
  2. Double File Extension Attack The double file extension attack features two extensions within a filename, but only the final file extension is operative. The preceding file extension appears to the system as part of the filename. For instance, photo.jpg.exe could be a double-extension filename.
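A mail gateway or endpoint tool can flag likely double-extension filenames by inspecting all of a file's suffixes rather than only the last one. A minimal Python sketch (the extension list is a partial, illustrative set):

```python
import pathlib

# Extensions Windows will execute; a preceding "decoy" extension such as
# .jpg or .pdf suggests a double file extension attack.
EXECUTABLE = {".exe", ".scr", ".bat", ".cmd", ".com", ".vbs", ".js"}

def looks_like_double_extension(filename: str) -> bool:
    """Flag names ending in an executable extension preceded by another."""
    suffixes = pathlib.PurePosixPath(filename).suffixes
    return len(suffixes) >= 2 and suffixes[-1].lower() in EXECUTABLE

print(looks_like_double_extension("vacation-photo.jpg.exe"))  # suspicious
print(looks_like_double_extension("report.pdf"))              # ordinary
```

Because display layers may truncate or hide the final extension, checking the full suffix chain programmatically catches what the eye misses.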
  3. VM Escape VM escape is the result of an attack upon a virtual machine whereby the attacker is successful in bouncing out of or escaping the virtual environment and controlling the hypervisor. Within the hypervisor, the attacker may then successfully control any other virtual machine as well as attack the underlying hardware infrastructure.
  4. Phishing Phishing is an attack that attempts to obtain personal information, credit card information, or login information by masquerading as a legitimate entity. Often the attack involves an email sent to an unsuspecting target, requesting confidential information. The email may appear legitimate and possibly direct the target to a fake website, which may appear to be identical to an authentic website.
  5. Spear Phishing Spear phishing is a directed attack on an individual or group of individuals with the goal of gathering personal or corporate information. For instance, the CEO of an organization that is engaged in a merger or acquisition process may be targeted by individuals wishing to sell valuable financial information to unscrupulous investors.
  6. Whaling Whaling is the targeting of senior executives within an organization, usually through official-appearing emails. Some of these emails may appear as legal documents or subpoenas and may appear to be generated by the U.S. Securities and Exchange Commission, the FBI, or a reputable law firm. In most cases, the senior executive is instructed to click a link in the email, which immediately infects the executive's PC.
  1. Vishing Vishing (voice phishing) is usually carried out by sending a fake email that instructs the target to call a specific phone number. The recording at the fake phone number usually prompts users to enter an account number or PIN, thus allowing the attacker to obtain personal information.
  2. Pharming Pharming is a type of social engineering attack to obtain access credentials, such as usernames and passwords. In practice, it's a type of attack that redirects the user to an unexpected website destination. Pharming can be conducted either by changing the hosts file on a victim's computer or by exploiting a vulnerability in DNS server software.
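The hosts-file variant of pharming can be checked for defensively by scanning the file for entries that override well-known domains. A sketch with invented sample content and an invented domain list:

```python
# Flag hosts-file entries that override sensitive, well-known domains
# (a common pharming technique). Sample content and domains are illustrative.
SENSITIVE_DOMAINS = {"bank.example.com", "mail.example.com"}

def suspicious_hosts_entries(hosts_text: str):
    """Return (domain, ip) pairs that override a sensitive domain."""
    flagged = []
    for line in hosts_text.splitlines():
        line = line.split("#")[0].strip()   # drop comments and whitespace
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        for name in names:
            if name.lower() in SENSITIVE_DOMAINS:
                flagged.append((name, ip))
    return flagged

sample = """
127.0.0.1   localhost
198.51.100.66  bank.example.com   # pharming redirect
"""
print(suspicious_hosts_entries(sample))
```

Because the hosts file is consulted before DNS, a single rogue line silently redirects every visit to the attacker's address.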
  1. Trojan Malware Trojan malware is malware that is disguised as a usable program. For instance, malware may be inserted and hidden inside a program such as Microsoft Paint. Once the user executes the Microsoft Paint application, the Trojan malware is immediately inserted into the system. Quite often, downloadable software, such as hardware drivers, software upgrades, and even software claiming to be a free malware or virus scanner, includes Trojan malware.
  2. Rogue Software This type of Trojan software is loaded by the user, either willingly or through other practices, and once installed, functions as ransomware. Functionality is typified by frequent pop-ups, changing of desktop or application appearance, or difficult-to-remove screens.
  3. In the Wild In the wild refers to malware that has been released onto the Internet. Imagine that this malware is roaming free and is being exchanged through unsuspecting host relationships, indiscriminate clicking of email links, and other types of actions that spread the malware through the Internet.
  4. Air Gap Air gap is a networking term that describes how an internal network can be totally isolated from the outside world. With no connections in or out of it, the network is said to be air gapped, meaning that there is a complete isolation zone around the network perimeter.
  1. Zero-Day Attack A zero-day attack is a type of attack in which the attacker uses a previously unknown attack technique or exploits a previously unknown vulnerability. The zero-day attack usually exploits a vulnerability but also exploits the time differential between the discovery of the zero-day attack and the time the manufacturer or developer issues a patch to correct the vulnerability. Many zero-day attacks are unreported and thus remain unpatched.

Managing Spam to Avoid Malware

Spam is the receipt of unwanted or unsolicited emails. It's not truly a virus or a hoax. It's one of the most annoying things that users encounter. Spam may also create larger problems for the user and a network administrator. Spam might contain links that automatically download malware on a mouse click, or it could contain a link that takes the user to an infected website.

There are many anti-spam applications available, and they can be run by network administrators as well as by end users. As with any filter mechanism, false positives and false negatives are always possible. On occasion, a spam filter will block important messages.
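The false-positive problem is easy to see in even a toy filter. The keyword weights and threshold below are invented for illustration; production filters rely on statistical analysis, sender reputation, and authentication records rather than fixed keyword lists:

```python
# Naive keyword-weight spam scoring (weights and threshold are assumed).
SPAM_TERMS = {"free": 2, "winner": 3, "act now": 3, "prize": 2}
THRESHOLD = 4

def spam_score(message: str) -> int:
    """Sum the weights of spam keywords present in the message."""
    text = message.lower()
    return sum(weight for term, weight in SPAM_TERMS.items() if term in text)

def is_spam(message: str) -> bool:
    return spam_score(message) >= THRESHOLD

print(is_spam("You are a WINNER! Claim your free prize"))  # flagged
print(is_spam("Meeting moved to 3pm"))                     # passed
# False positive: legitimate mail tripping the same keywords.
print(is_spam("Congratulations to our raffle winner -- free prize inside"))
```

The third message is legitimate but scores above the threshold, illustrating why spam filters inevitably block some important messages.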

Although spam is a popular term to describe unwanted emails, similar terms have evolved to describe unwanted messages in other mediums. For instance, SPIM (Spam over Instant Messaging) and SPIT (Spam over Internet Telephony) describe specific unwanted messages.

Cookies and Attachments

To provide a customized web experience for each visit, a website may place a cookie, a small text file, on the user's computer through the browser. This text file typically contains information about the user, such as the client's browsing history, browsing preferences, purchasing preferences, and other personal information. For instance, Amazon.com utilizes cookies placed on a user's computer to record user preferences and browsing habits. When the user returns to Amazon.com, the site reads the cookies and dynamically generates web pages illustrating products that may be of interest to the user.

Cookies are considered a risk because they have the ability to contain personal information about the user that might be exploited by an attacker. Passwords, account numbers, PINs, and other information may be inserted into a cookie.

Although cookies can be turned off or not accepted by a browser, websites have an ability to write a type of cookie that creates persistent data on a user's computer. An Evercookie, created by Samy Kamkar, stores cookie data in several locations the website client can access. Should cookies be cleared by the end user, the data can still be recovered and reused by the website client. Evercookies are written in JavaScript code and are intentionally difficult to delete. They actively “resist” deletion by copying themselves in different forms on the user's machine and resurrect themselves if they notice that some of the copies are missing or expired.

Similar to user preferences recorded in cookies, a device fingerprint (sometimes referred to as a machine fingerprint or browser fingerprint) is a single string of information collected from a remote computing device for the purpose of identification. With the use of client-side scripting languages such as JavaScript, the collection of device parameters is possible. Normally, a script is run on the client machine by a legitimate website or by an attacker wishing to enumerate the device.

If security is of utmost concern, the best protection is to not allow cookies to be accepted. Cookies may be enabled or disabled using any browser options menu. Most browsers allow you to accept or reject all cookies or only those from a specified server. Figure 9.2 illustrates a typical cookie. This screen shows that there are 12 cookies from Google.com. Notice that the selected cookie has various metadata as well as a content field. Applications may read or rewrite any of the information as required. Every browser allows the user to access the cookie file. Notice the X to the right of the cookie. Clicking this allows you to erase the cookie. All cookies can be bulk erased by clicking the Remove All button at the top.


Figure 9.2 The APISID cookie from Google.com

Most of the time the word attachment refers to the file attached to an email document. Attachments may also be included in instant messages (referred to as IMs), text messages, and other types of communication. Attachments offer challenges for the security professional because they can harbor malicious code.

When someone is sending a message with an attachment, their email client marks the attachment with two different labels. These include the data type known as the content type or MIME type and the original filename. The labels instruct the receiver's email client as to what application to use to open the attachment. For instance, graphics data would be displayed as a photograph or picture, an audio or video file would be played in the appropriate media player, and a document such as a Microsoft Word document would be shown as a link that must be clicked to open the document.
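Python's standard library exposes the same filename-to-content-type mapping that email clients consult, which makes the labeling easy to demonstrate:

```python
import mimetypes

# Guess the MIME (content) type an email client would associate with an
# attachment filename; this mapping drives which application opens it.
for filename in ["photo.jpg", "notes.pdf", "song.mp3"]:
    mime, _encoding = mimetypes.guess_type(filename)
    print(filename, "->", mime)
```

An attachment whose declared MIME type does not match its actual content is a classic warning sign, which is why gateways inspect file contents rather than trusting the label alone.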

Attachments can be dangerous. They can contain an executable program. The executable is interpreted directly by the computer and can possibly install a virus, covertly transfer data to a remote host computer, or even destroy host data entirely. However, executables must be written for a specific type of computer. For instance, an executable written for the Linux operating system will not operate on Windows-based computers. Another risk is an attachment containing a script. A script is an executable file that may be downloaded and executed directly by the browser or host computer. Scripts usually are interpreted by other programs such as Adobe Flash Player and Internet Explorer. An example of a malicious scripting virus is the Melissa virus, a Visual Basic for Applications (VBA) macro embedded in a Word document that infected the host computer and then mailed itself to the contacts in the host's email address book.

Not all attachments contain dangerous scripts or malicious software. Some attachments contain audio files, media files, static picture files, or other files that might be malformed and, when interpreted by the email client, cause the client or the entire host system to crash.

The following methods can be used to protect against threats contained in attachments:

  • Keep all software up-to-date with current versions and patches.
  • Create a white list of approved or safe email sources.
  • Set an antivirus program to scan all attachments.
  • If a link is present, save the contents in a file rather than clicking the link directly.
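The advice above can be combined into a simple gateway check; the trusted domains and blocked extensions below are illustrative placeholders, not a complete policy:

```python
# Sketch of attachment handling: an allow list of trusted sender domains
# plus a blocked-extension check (example values are assumed).
TRUSTED_DOMAINS = {"example.com", "partner.example.org"}
BLOCKED_EXTENSIONS = {".exe", ".scr", ".vbs", ".js", ".bat"}

def should_quarantine(sender: str, attachment: str) -> bool:
    """Quarantine mail from untrusted domains or with risky attachments."""
    domain = sender.rsplit("@", 1)[-1].lower()
    extension = "." + attachment.rsplit(".", 1)[-1].lower()
    return domain not in TRUSTED_DOMAINS or extension in BLOCKED_EXTENSIONS

print(should_quarantine("alice@example.com", "budget.xlsx"))  # delivered
print(should_quarantine("alice@example.com", "invoice.exe"))  # quarantined
print(should_quarantine("mallory@evil.test", "photo.jpg"))    # quarantined
```

In practice such a check supplements, rather than replaces, antivirus scanning of the attachment contents, since sender addresses can be spoofed.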

Malicious Code Countermeasures

Numerous techniques are available to detect malicious code and to take corrective and recovery actions upon discovery. These techniques involve software applications as well as hardware appliances that may be used as controls to mitigate the risk of malicious code and malware affecting a network or network node device.

  1. Anti-malware and Anti-spyware Anti-malware and anti-spyware software must be installed on every network node, including host computers, mail servers, file servers, and detection and prevention devices. The software must be updated regularly and enabled to automatically receive the latest virus and spyware definitions. Anti-malware software should be tested regularly to ensure that a retrovirus has not corrupted the application or deleted signature library files. The application should support the following types of scans:
    1. Scheduled Scan All anti-malware and anti-spyware software can be scheduled to automatically scan at a certain time, and in many cases the default is in the middle of the night. Care must be taken to ensure that the scans are properly accomplished. In some cases, the PC is turned off during the scheduled scan time or the scan will not occur. It is advisable to reference log files of the anti-malware software to ensure that scheduled scans were accomplished as expected.
    2. Real-Time Scan Most anti-malware and anti-spyware software can be enabled to scan files as they are opened, emails as they are received, and devices such as USB drives as they are attached. The downside is that real-time scan can add latency when software is opened and files are loaded.
    3. On-Demand Scan All anti-malware and anti-spyware software allow you to perform a file scan at any time. It is standard for these applications to have both an extensive scan, which may require several hours to complete, and an expedited or targeted scan, which scans either a set group of previously selected files or files that have recently changed.
  2. Protocol Analyzers The terms protocol analyzer and packet sniffer generally refer to the same technique or software application used to intercept packets flowing along the network. Data transmitted across the network may be intercepted by a personal computer with a network interface card set in promiscuous mode. Normally, network interface cards listen only for the traffic destined for them; promiscuous mode allows the software to receive and monitor all traffic. A protocol analyzer may be attached to a network using a network tap, which is simply a three-way splitter that routes a copy of the traffic to the network interface card. Protocol analyzers can display real-time network traffic and feature various filtering capabilities to better visualize the vast amounts of traffic being received. These tools usually keep the captured recordings or log files of network traffic for later analysis. Well-known protocol analyzers include Wireshark and tcpdump; Snort, although primarily an intrusion detection system, is also built on packet capture. Figure 9.3 depicts a typical Wireshark packet capture. This capture collected 23 packets of an SSL handshake.
    Snippet image of the Wireshark protocol analyzer screen displaying a packet capture which collected 23 packets of a SSL handshake.

    Figure 9.3 A Wireshark packet capture

  3. Vulnerability Scanners Vulnerability scanners provide the ability to scan a network and search for weaknesses that may be exploited by an attacker. The vulnerability scanner software application looks for weaknesses in networks, computers, or even software applications. Vulnerability scanners can include port scanners and network enumerators, which conduct a series of tests on a target and search against an extensive list of known vulnerabilities. Some vulnerability scanning applications are listed here:
    1. Nmap Nmap is a software application for probing computer networks, providing detection and discovery of hosts and services running on ports, and determining operating systems. In operation, Nmap sends special packets to target nodes and analyzes the response.
    2. Nessus Nessus is a popular vulnerability scanner that checks for misconfigurations, default passwords, and the possibility of hidden denials of service. In operation, Nessus determines which ports are open on the target and then tries various exploits on the open ports.
    3. Retina Retina is a commercially available network security scanner that provides advanced vulnerability scanning across the network, the Web, and virtual and database environments. Used to continually monitor the network environment, it may be used to detect vulnerabilities on a real-time basis and recommend remediation based on risk analysis of critical assets.
    4. Microsoft Baseline Security Analyzer The Microsoft Baseline Security Analyzer (MBSA) is a tool that can scan a system and find missing updates and security misconfigurations. It can be used to determine the security state of a PC in accordance with Microsoft security recommendations and offers specific remediation guidance. It can be used to scan one or more computers at the same time and can use a computer's name or IP address to schedule a scan. Figure 9.4 illustrates a typical scan using the MBSA. The scan was enabled to scan only one machine and to rank the problems found from top-down.
      Image described by caption.

      Figure 9.4 A Microsoft Baseline Security Analyzer scan showing several problems that were found

  4. Dynamic Threat Analysis Appliance Threat analysis appliances have recently come on the market that dynamically detect malware and are described as an anti-malware protection system. A threat analysis appliance is used to monitor and protect network, email, endpoint, mobile, and content assets. A major benefit is the ability of the threat analysis appliance to dynamically monitor the environment for recently changed malware signatures or previously unknown malware attacks, referred to as zero-day exploits. Upon finding a zero-day exploit or changed malware signature, the client may send information to the manufacturer's investigation laboratory. The malware is usually tested and studied in a sandbox to determine the harm that it could cause. This information is then shared with a global network of similar devices, thus immediately protecting those environments from the recently discovered attack.
  5. Intrusion Prevention Systems Intrusion prevention systems are network appliances that monitor networks at various locations for malicious activity. They are placed in line and are able to execute rule-based actions based on attack detection. Such actions may include dropping packets, changing firewall rules, alerting operators, resetting connections, and blocking activity.
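The port-probing approach used by vulnerability scanners such as Nmap can be illustrated with a minimal sketch. This is not how Nmap itself is implemented (it uses raw SYN probes, service fingerprinting, and sophisticated timing); it simply shows the basic idea of testing which TCP ports on a target accept connections:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Minimal TCP connect() scan: a port is reported open if the
    three-way handshake completes. Real scanners use faster half-open
    (SYN) probes and also fingerprint the services they find."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports
```

As with any scanning tool, such a probe should only ever be run against hosts you are authorized to test.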

Malicious Add-Ons

Add-ons are small applications that are downloaded to the client computer during a web session. Because these are executable programs, they have the potential of doing harm to the client computer. Some of these apps, add-ons, or included scripts do harm unintentionally due to poor programming or poor design, while others are malicious add-ons that are designed with the intent of doing harm to the client computer. The following sections describe two general types of web-based add-ons: Java applets and ActiveX controls.

Java Applets

A Java applet is a small, self-contained Java program that is downloaded from the server to a client and then runs in the browser of the client computer. For a Java applet to execute, Java must be enabled on the client browser.

Java applets are small programs that control certain aspects of the client web page environment. They are used extensively on web servers today, and they're becoming one of the most popular tools for website development. They greatly enhance the user experience by controlling various kinds of functionality and visuals presented to the end user. Java-enabled websites serve web pages containing preprogrammed instructions in the form of compiled Java applets, which are downloaded from the server and control the actions and presentation in the user environment.

Java applets must be able to run in a virtual machine on the client browser. By design, Java applets run in a restricted area of memory called a Java sandbox, or sandbox for short. A sandbox is set up anytime it is desired to restrict an application from accessing user areas or system resources. A Java applet that runs in a sandbox is considered safe, meaning it will not attempt to gain access to system resources or interfere with other applications. Potential attacks or programming errors in a Java virtual machine might allow an applet to run outside the sandbox. In this case, the applet is deemed unsafe and may perform malicious operations. The creators of Java understood the potential harm that running unchecked code on a client computer could cause, which is why they required Java applets to always execute in the Java sandbox.

ActiveX

ActiveX is a technology that was implemented by Microsoft to customize controls, icons, and other features to increase the usability of web-enabled systems. An ActiveX control is similar in operation to a Java applet. Unlike Java applets, however, ActiveX controls have full access to the Windows operating system. This gives ActiveX controls much more power than Java applets because they don't have the restriction or requirement of running in a sandbox. ActiveX runs on the client web browser and utilizes an author validation technology based on Authenticode certificates, which are used by the client as a type of proprietary code-signing methodology. Web browsers are usually configured so that they require confirmation by the user prior to downloading an ActiveX control. Security professionals must be careful to fully train end users about allowing the use of ActiveX on their personal computers. An invalid Authenticode certificate will generate a pop-up message giving the user the option to accept and continue or to reject the ActiveX plug-in. Choosing the accept option may allow the installation of malware or other harmful actions to be taken by the ActiveX plug-in.

User Threats and Endpoint Device Security

An organization's endpoint devices, such as desktop workstations, printers, scanners, and servers, as well as personally owned devices such as tablets and laptops, must be configured and used in a secure manner for several reasons. First, it is very likely that sensitive, confidential, or even proprietary information is stored or processed on the workstation. Regulatory agencies or contractual relationships may expose the organization to liability and possibly substantial fines if information is not protected using generally accepted protection methods. The concept of due care dictates that appropriate controls are put into place to adequately protect information and ensure that it is not improperly disclosed. Second, system integrity is critical. End-user workstations must operate as expected, be available for use, utilize applications that are not compromised, and provide processed data that is complete and accurate. If data on a user's workstation has been compromised, lost, or altered, it may lead to wrong decisions. Loss of workstation availability will most certainly lead to lost productivity.

The security practitioner is absolutely essential in ensuring that the organization's end-user workstations meet minimum standards for connecting to the network and maintain the system health and hygiene needed to provide reliable computing services.

Since the security practitioner is the first line of defense, it is quite common for them to be involved in end-user training and communicating best practices relating to a variety of daily work tasks, including protecting passwords, surfing websites, and opening email attachments. Users must be made aware of the hazards of downloading software, clicking on phishing links, and performing tasks and activities that place the user workstation as well as the network in jeopardy.

General Workstation Security

Workstation security may be mandated through IT security policy. Typically, such a policy encompasses other policies, such as workstation policies. Workstation policies may include the activities that are carried out on a workstation, such as file sharing, making backups, managing software patch updates, and other general activities that involve the management of an organization's workstations.

Passwords

Workstation and application passwords should be changed routinely. A standard within the IT industry specifies an interval for password changes as 60 to 90 days. A number of organizations use a multi-tier password policy that involves changing passwords more frequently for sensitive or confidential information. In some instances, the changing of passwords may be more frequent based upon job roles such as power users or system administrators. General password policy administration should include the following:

  1. Utilize the system key utility. The system key utility (Syskey) provides an extra defense within a Windows-based system against password-cracking software. User account password information is stored in the Security Accounts Manager (SAM) database of the Registry on workstations in a Microsoft Windows environment. Attackers often target the SAM database or Active Directory services with password-cracking software to access user account passwords. The system key utility makes use of strong encryption techniques that make cracking encrypted account passwords more difficult and time-consuming than cracking non-encrypted account passwords. There are three levels of security offered by the system key utility depending upon how or where the system key is stored.
  2. Password best practices. There are numerous articles on the Internet outlining best practices for passwords. Any or all of these practices should be incorporated into security awareness courses for end users. Here are some suggestions:
    • Create unique passwords using combinations of words, numbers, and symbols with both upper- and lowercase letters.
    • Do not use easily guessed passwords such as your first name, "password," "12345678," or "user."
    • Do not use personal information as a password such as Social Security number, birth date, spouse's name, street name, names of family members, or other easily identifiable information.
    • Do not use personal names or any word that may appear in a dictionary. Password cracking tools are available for English and most foreign languages and can easily test tens of thousands of words against your password.
    • Using password phrases such as IgraduatedfromtheUniversityofTexasin1977 will make it difficult for an attacker to use a brute-force cracking method. Complexity and length are the key strengths of a password phrase.
    • Of course, remind users that they should not write passwords on sticky notes.

    Figure 9.5 is a typical Microsoft warning screen advising users of a password change policy. The wording on these types of screens may easily be changed to fit the policy.

    Image described by caption and surrounding text.

    Figure 9.5 A typical password change policy advisory pop-up
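The password guidance above can be sketched as a simple policy checker. The specific rules here (12-character minimum, a tiny sample common-password list) are illustrative assumptions, not a standard; an organization's actual policy would define its own thresholds and blacklist:

```python
import re

COMMON = {"password", "12345678", "qwerty", "letmein"}  # tiny sample list

def check_password(pw):
    """Return a list of policy violations, following the best practices
    above: length, mixed case, digits, symbols, no common passwords."""
    problems = []
    if len(pw) < 12:
        problems.append("shorter than 12 characters")
    if pw.lower() in COMMON:
        problems.append("appears in a common-password list")
    if not re.search(r"[a-z]", pw) or not re.search(r"[A-Z]", pw):
        problems.append("needs both upper- and lowercase letters")
    if not re.search(r"\d", pw):
        problems.append("needs a digit")
    if not re.search(r"[^A-Za-z0-9]", pw):
        problems.append("needs a symbol")
    return problems

print(check_password("Fluffy1"))  # flags short length and missing symbol
```

Note that a long passphrase may fail the symbol rule while still being strong; real policies often relax complexity requirements once a length threshold is met.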

Password Lockout Policy

A password attack known as a brute-force attack can be automated to attempt thousands or even millions of password combinations. Account lockout is a password security feature Microsoft has built into Windows 2000 and later versions of the operating system. The feature disables a user account after a preset number of wrong passwords. The number of failed login attempts, as well as the period of time to wait before logins may resume, can be set by the administrator. The three policy settings are as follows:

  1. Account Lockout Threshold The number of failed logins after which the account becomes locked out. The range is 1 to 999 login attempts.
  2. Account Lockout Duration The number of minutes a locked-out account remains locked out. The range is 1 to 99,999 minutes.
  3. Reset Account Lockout Counter After The number of minutes it takes after a preset number of failed logon attempts before the counter tracking available logons is reset to zero. The range is 1 to 99,999 minutes.

The account lockout feature is enabled through the Group Policy Object Editor and uses the relevant Group Policy settings. There are two other special settings:

  1. Account Lockout Duration = 0 Once an account is locked out, an administrator is required to unlock it.
  2. Account Lockout Threshold = 0 This setting means that the account will never be locked out no matter how many failed logon attempts occur.
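The interaction of the three settings can be sketched in code. This is a simplified model for illustration only, not Windows' actual implementation; times are passed in as seconds so the logic is easy to follow:

```python
class LockoutPolicy:
    """Sketch of the three account lockout settings described above.
    threshold=0 disables lockout entirely, and duration_min=0 means
    an administrator must unlock the account."""
    def __init__(self, threshold=5, duration_min=30, reset_min=30):
        self.threshold = threshold
        self.duration_min = duration_min
        self.reset_min = reset_min
        self.failures = 0
        self.first_failure = None   # time of first failed logon in window
        self.locked_at = None       # time the account became locked

    def is_locked(self, now):
        if self.locked_at is None:
            return False
        if self.duration_min == 0:              # admin unlock required
            return True
        return (now - self.locked_at) < self.duration_min * 60

    def record_failure(self, now):
        if self.threshold == 0:                 # lockout disabled
            return
        # Reset the failure counter once the reset window has elapsed.
        if (self.first_failure is not None
                and (now - self.first_failure) > self.reset_min * 60):
            self.failures = 0
            self.first_failure = None
        if self.first_failure is None:
            self.first_failure = now
        self.failures += 1
        if self.failures >= self.threshold:
            self.locked_at = now
```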

Malware Protection

Anti-malware software is a type of computer software used to prevent, detect, and remove malicious software. Originally named antivirus software because it was developed to control and remove computer viruses, it now has much broader use, providing protection from Trojan horses, worms, adware, and spyware, among many other kinds of malicious software.

The backbone of endpoint security rests upon the prevention, discovery, and remediation of the effects of malware. Every endpoint on the network should have anti-malware protection software installed. Software should be maintained automatically because updates for new viruses are generally made available every week. Anti-malware software should be configured to automatically scan emails, transferred files, and any type of portable USB drive (such as thumb drive or memory stick), CD/DVDs, and software on any other media. Anti-malware should be set to automatically scan the endpoint at least once a week if not more frequently.
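Verifying that scheduled scans actually ran, as recommended above, can be sketched against a log of scan timestamps. The ISO-format timestamp log used here is invented for illustration; each real anti-malware product has its own log layout:

```python
from datetime import datetime, timedelta

def scan_overdue(scan_log, expected_interval_days=7, now=None):
    """Given a list of ISO-format scan timestamps (hypothetical log
    format), report whether the most recent scan is overdue."""
    now = now or datetime.now()
    if not scan_log:
        return True                     # no scan has ever completed
    last = max(datetime.fromisoformat(ts) for ts in scan_log)
    return (now - last) > timedelta(days=expected_interval_days)

# Example: a PC powered off overnight misses its scheduled scans.
log = ["2024-03-01T02:00:00", "2024-03-02T02:00:00"]
print(scan_overdue(log, expected_interval_days=1,
                   now=datetime(2024, 3, 5, 9, 0)))   # -> True
```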

Anti-malware software makes use of a number of different methods to identify unwanted software:

  1. Signature-Based Detection Signatures are patterns of known malware. By comparing signatures to software within a workstation, malware software can identify potentially harmful software.
  2. Behavioral-Based Detection A detection mechanism that recognizes various software behaviors and matches them to a library of expected behaviors of known harmful software.
  3. Heuristic-Based Detection A learning and statistical assumption technique used in making very fast decisions with relatively little information.
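Signature-based detection, the first of these methods, can be sketched as a comparison of file digests against a database of known-bad values. Real engines match byte patterns and fuzzy signatures rather than whole-file hashes, and the one-entry "database" here (the published SHA-256 of the EICAR test string) is purely illustrative:

```python
import hashlib

# Hypothetical signature database: SHA-256 digests of known malware.
KNOWN_BAD = {
    # Published digest of the harmless EICAR antivirus test file.
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
}

def sha256_file(path):
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_malware(path):
    """Signature match: the file's digest appears in the database."""
    return sha256_file(path) in KNOWN_BAD
```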

The implementation of an antivirus/anti-malware software solution should include the following:

  • Scheduled daily or weekly antivirus/anti-malware signature updates.
  • Scheduled periodic scans of all files and file types maintained in any storage location.
  • Real-time email and attached storage device scanning enabled.
  • Real-time Internet page scans with automatic script blocking enabled.
  • If malware is found, clean the threat first and then quarantine it.
  • Antivirus/anti-malware software must be initiated upon system startup.
  • Antivirus/anti-malware must be protected from unauthorized configuration changes or circumvention.

Through an organizational IT security policy, antivirus/anti-malware software must be up and running on all network endpoints, including personal computers, file servers, mail servers, and network attached personal devices.

Backups

Most workstations store files on a local hard drive. If a workstation stores files locally that contain critical or sensitive information, those files should be transferred to a server for storage and scheduled backup. The level and frequency of backup required depends on the criticality of the data. This data should be considered in the IT backup policy. To ensure adequate backups of files, they should be copied to a secure server location that is backed up on a regular schedule. Some information within a specific security category or risk level may be regulated by IT security policy and should never be written to local hard drives.

Normally, workstations can be created by using a standard workstation image. But over time, additional software applications and data are added to the system drives. Users should be made aware of techniques to back up applications and data to a central source in the event a workstation should ever be required to be reimaged. The imaging, of course, erases all user data.

If workstations are ever backed up to portable drives, optical discs, floppy disks, or any other portable media, security precautions should be considered for the protection of the backup media. Not only should the backup data be kept in a safe place, but it should also be protected from possible theft and exfiltration.
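The idea of copying changed local files to a backup location can be sketched as below. This is a bare illustration of an incremental copy based on modification times; real backup tools also verify, version, and (per the precautions above) encrypt the copies:

```python
import shutil
from pathlib import Path

def backup_changed(src_dir, dest_dir):
    """Copy files from a workstation folder to a backup location,
    skipping files whose backup copy is already up to date."""
    src, dest = Path(src_dir), Path(dest_dir)
    copied = []
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dest / f.relative_to(src)
        # Copy if no backup exists yet or the source is newer.
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)   # copy2 preserves timestamps
            copied.append(str(f.relative_to(src)))
    return copied
```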

Anonymous Access

Anonymous access control is an access control methodology in which the server or the workstation may ask for user identification (username) or authentication information (user password) but no verification is actually performed on the supplied data. In many cases, the user is asked for an email address to simply access the system.

An endpoint or workstation that provides a File Transfer Protocol (FTP) service may provide anonymous FTP access. Users typically log into the service with an anonymous account when prompted for a username. In some applications, such as updating software, downloading information, or sharing files between users, identification of the user is not required.

Anonymous access may also be provided to various wireless network subnets that host guest access, such as in lobbies of buildings. Persons joining this type of wireless network can usually access only a limited amount of information and are totally screened from the internal network.

It is suggested that you do not allow anonymous access of any kind. User authentication should always be used to protect file transfer for any foreign user access to either a workstation endpoint or a server. User access should always be monitored by a network logging mechanism.

Patches and Updates

Proper endpoint configuration is essential for the security of not only the system but the entire network. Poorly configured devices can cause more harm than a software defect. Every endpoint should be scanned to detect and eliminate inappropriate share permissions, unauthorized software, and excessive user privileges. Weak or missing passwords, or services running with elevated or administrative privileges, may expose the device to a variety of exploits.

Patch management is crucial for the security of the endpoint. Many organizations utilize a centralized patch management system. The system automates patch distribution and installation and logs and verifies that patches have been made and updates have been installed. Patches should always be tested on simulated production systems prior to installation in the network. Users should be advised to leave their workstations powered up on the evening that the patches will be pushed out to the workstations.
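The verification step of a centralized patch management system can be sketched as a comparison of an installed-software inventory against minimum patched versions. The inventory and version formats here are hypothetical and simplified (real version schemes are messier than dotted integers):

```python
def parse_version(v):
    """Turn a dotted version string into a comparable tuple."""
    return tuple(int(x) for x in v.split("."))

def missing_patches(installed, required):
    """List packages that are absent or below the minimum patched
    version, given dicts of name -> version string."""
    return [name for name, minimum in required.items()
            if name not in installed
            or parse_version(installed[name]) < parse_version(minimum)]
```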

All mobile devices issued by the organization should have security patches applied in a timely manner. Users should be directed to make their systems available at a specific place and time so that updates and patches may be installed.

Provision should always be made to back up workstations as well as mobile devices prior to installation of any patches, updates, or system upgrades. And for updates and upgrades, the appropriate procedures should be followed as outlined in the organization's change management policy.

Physical Security

The physical security of valuable assets is always a concern to any organization. Theft of a laptop or PC is always top of mind at the mention of physical security, but theft of information from unprotected workstations and unauthorized access to network wiring cabinets are also possibilities with lax physical security.

Many organizations employ layered personnel access security. For example, access layers range from a public lobby area through restricted work areas to highly restricted IT areas. Most buildings feature restricted badge access or even more stringent controls such as biometric authentication, security guards, CCTV cameras, and even mantraps.

End users can take various steps to protect the assets of a business, which include both the physical hardware and the data contained on it. During a security awareness class, users should be made aware of the following:

  • Power off workstations or place them in standby mode when not in use.
  • Ensure that office doors are locked when offices are unattended.
  • Employ a clean desk policy so no information remains on a desk when it is unattended. This includes locking cabinets and desk drawers.
  • Workstations should employ a wire cable and lock or other appropriate antitheft device. Devices should be securely locked to an immovable object. If a workstation or desktop employs a power button key lock, it should be used as additional desktop security.
  • Removable devices such as thumb drives, stick memory, micro cards, USB storage devices as well as portable disk drives such as network attached storage, and all types of media containing data should be properly locked and secured before a user exits the area.
  • Workstations should employ a password-protected screensaver, thereby requiring a password to regain access.
  • Workstations involving sensitive or restricted data should be set to log off after a brief period of inactivity.

Physical security should be a continuous concern for an organization and should become part of the organization's security culture.

Clean Screen/Clean Desktop

There are several aspects of both physical and logical security in relation to CRT and LCD workstation displays. Of course, the display is the eyes into the data, the application, and the entire network. Care should be taken with information that appears on a workstation display. A security practitioner can utilize the following when securing visual information:

  1. Screensaver A screensaver should be used every time a user is not in front of the workstation display. The screensaver should either be triggered by the user upon exit of the area or be triggered by a brief period of inactivity. The screensaver should automatically lock the computer, thereby requiring a user to enter a password to re-access the workstation.
  2. Automated Log Off Workstation settings should be established to automatically log off the user after a period of inactivity. This may be a brief period of time, such as 10 minutes, but usually not longer than 30 minutes. In the event highly restricted information such as PII, HIPAA data, or classified information is used on the workstation, the workstation should automatically log off after a very brief period of inactivity. Token workstation access requires a token such as a plastic identification card to be inserted directly into the workstation to allow workstation logon and access. In the event the individual leaves the immediate vicinity of the workstation and withdraws the card, the workstation immediately logs off.
  3. Automated Power-Down Such as Hibernate, Sleep, or Power Off Modes Workstation settings should be established to automatically place the workstation into power saving mode after a period of inactivity.
  4. Visual Security Filter A polarizing plastic filter is a visual security filter that may be placed over the screen to reduce the area of view to only the user sitting directly in front of the screen. This reduces the possibility of other individuals observing data or sensitive displayed information.
  5. Physical Security Positioning Display screens should be placed in an office or cubicle area that provides the least amount of exposure to undesired viewers. For instance, workstation displays should not face into the room so others can view the information. This is sometimes difficult with personnel working in cubes, but with the use of polarizing security filters, some security may be achieved.
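The screensaver and automated log-off timers described above can be modeled as a simple state machine. The 10-minute lock and 30-minute log-off thresholds are the illustrative values from the text, not a mandate:

```python
class IdleLock:
    """Sketch of the inactivity controls above: lock the screen after
    lock_after seconds of no input, log the user off after logoff_after."""
    def __init__(self, lock_after=600, logoff_after=1800):
        self.lock_after = lock_after
        self.logoff_after = logoff_after
        self.last_activity = 0.0

    def activity(self, now):
        """Record keyboard/mouse input at time `now` (seconds)."""
        self.last_activity = now

    def state(self, now):
        idle = now - self.last_activity
        if idle >= self.logoff_after:
            return "logged_off"
        if idle >= self.lock_after:
            return "locked"   # password required to regain access
        return "active"
```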

A clean screen/clean desktop policy is a security concept that limits or controls information shown on the display screen. Clean desktop is the display of a minimum number of desktop icons. When the number of icons displayed on the screen is limited, prying eyes are not able to view folder names, document names, or other content displayed on the workstation desktop. A simple layer of defense is created in that an unauthorized individual gaining access to the system would not be able to easily click the desktop icon to gain access to valuable information.

The clean screen concept is a security technique that can be used to immediately blank out a monitor display screen or display dummy information. This is used to immediately hide anything displayed on the screen. There are a number of techniques, described on the Internet, used to accomplish this. One technique is to press the hotkey combination Win+D to immediately return to the desktop. A fake background may also be applied to the desktop so it appears to be an actual working document: take a screenshot of an application page such as a spreadsheet or word-processing document, save it as the desktop background, and hide all of the icons on the desktop. Once you use the hotkey to switch, the fake document will appear.

Email Security

Malware might easily enter any user's workstation with a simple mouse click in an email. As a security practitioner, it may become your responsibility to make your organization's users aware of the following secure email practices and the downside of various email schemes:

  1. Do not open unsolicited attachments. If the attachment is not from a trusted source, do not click it. Some emails are forwarded after being hijacked from the email address list of a friend and may have a subject line such as “You'll love this cat.” In fact, filenames may be spoofed and a GIF photo file may actually be an EXE file in disguise. Once you click, it automatically loads and executes.
  2. Set your malware scanner to automatically scan emails. This may take just a little extra time, but an email could contain a keylogger.
  3. Avoid checking emails on public Wi-Fi systems. Any free Internet hotspot carries the possibility of a nefarious individual running a rogue sniffer. An attacker operating a rogue hotspot can make it appear to be a legitimate hotspot location and may intercept Wi-Fi traffic in the form of passwords, bank accounts, logins, and other personal information, using it at a later time after you're long gone from the location.
  4. Use separate email accounts. With separate accounts, not only can you kill one that is becoming saturated with spam, you can use different types of accounts for different types of email. For instance, you might have accounts set up for work email, social email, and family email.
  5. Use strong email account passwords. It is quite common for individuals to make use of the same password for all of their accounts, not only email accounts but sometimes bank accounts and credit card accounts as well as Amazon and eBay. If a hacker compromised the email account password, they might easily extend the use of the password on other accounts that they intercept.
  6. Be aware of phishing scams. This type of scam may impersonate known websites such as eBay, PayPal, or your favorite department store. The site may look totally legitimate and email may state that account information needs to be confirmed or records need to be updated and so your username, password, and other personal information is required.
  7. Never click an email link. Any link in an email, no matter how innocent it looks, could lead directly to an executable that may be immediately downloaded and infect your system.
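The filename-spoofing trick mentioned in item 1 (a "GIF" that is really an EXE) can be detected by inspecting a file's leading "magic" bytes rather than trusting its extension. The short signature table below covers only a few well-known formats for illustration; real scanners use far larger databases:

```python
# Leading "magic" bytes for a few well-known file formats.
MAGIC = {
    b"MZ": "Windows executable",
    b"GIF8": "GIF image",
    b"\x89PNG": "PNG image",
}

def real_file_type(data):
    """Identify a file by its first bytes, not its name."""
    for magic, label in MAGIC.items():
        if data.startswith(magic):
            return label
    return "unknown"

def extension_mismatch(filename, data):
    """Flag a 'photo.gif' whose bytes actually start like an EXE."""
    if filename.lower().endswith(".gif"):
        return real_file_type(data) != "GIF image"
    return False
```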

Of course, the IT department administration of email can be enforced by IT policy, software settings, and other security controls. Spam filters, data loss protection devices, as well as other protective controls can be put in place to mitigate the problems with endpoint email.

Host PC Firewall

Firewall software should be enabled on all endpoint devices. Windows and Mac OS currently include host-based firewall protection. Firewalls can be set to specifically block all unused ports on the endpoint device as well as to allow only white-listed inbound traffic.

A host-based firewall is a piece of software that runs on the endpoint device and restricts incoming and outgoing information for that one endpoint. While network firewalls have much more detailed rule sets, a host-based firewall might simply list items to allow or deny.

Many current antivirus software applications contain a built-in firewall as an added benefit. It is important to read instructions carefully because the operating system firewall and an anti-malware application firewall might conflict and cause a self-inflicted denial of service. Some firewalls contained in an antivirus software application may suppress website pop-up windows, filter emails, scan USB devices, and provide a variety of other attractive options.
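The simple allow/deny rule set of a host-based firewall can be sketched as first-match-wins evaluation with a default deny. The rule format here (action, direction, port) is an invented simplification; real host firewalls also match on addresses, protocols, and applications:

```python
def allowed(port, direction, rules):
    """Evaluate a minimal host-firewall rule list: the first rule
    matching the direction and port wins; otherwise default deny."""
    for action, rule_dir, rule_port in rules:
        if rule_dir == direction and rule_port in ("any", port):
            return action == "allow"
    return False  # default deny

rules = [
    ("allow", "in", 443),    # white-listed inbound HTTPS
    ("deny",  "in", "any"),  # block everything else inbound
    ("allow", "out", "any"), # permit outbound traffic
]
```

Listing the explicit allow before the catch-all deny mirrors how first-match rule sets are typically ordered.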

Encryption Full Disk, File, Server, Removable Storage

It is always recommended that sensitive or confidential data not be stored on a local workstation or endpoint but be retrieved from a secure server as required. In the event that personally identifiable information (PII), HIPAA data, financial information, or other restricted data must be saved to the user's local hard disk, it should be heavily encrypted. We have all heard on news programs that laptops have been stolen, exposing tens of thousands of accounts with information that was stored locally on the laptop hard drive. In all cases, sensitive information must be encrypted when on a personal computer or personal storage device such as a USB drive or an external hard drive.

All versions of the Microsoft Windows operating system developed for business use feature the Encrypting File System (EFS). The default setting for Windows operating systems is that no files are encrypted. EFS allows users to apply encryption to individual files, directories, or the entire drive. It may be invoked through the use of Group Policy in Windows domain environments. EFS is available only on hard drives and USB drives formatted with the Microsoft New Technology File System (NTFS) format.

Consideration should be given to exactly how to manage encrypted data stored in files and folders on an endpoint system. In most instances, files inherit the security properties of the folder in which they reside. Care should be taken when processing data from encrypted files that the resulting processed data is not placed in a non-encrypted folder, thereby declassifying or unencrypting the information.

Users should never encrypt individual files but should always encrypt folders. Encrypting files consistently at the folder level ensures that files are not unexpectedly decrypted by an application. A best practice is to encrypt the My Documents folder on all endpoint devices so the user's personal folder, where most documents are stored, is encrypted by default.

Software encryption products such as the open-source TrueCrypt, Microsoft's BitLocker, or the commercial product Check Point Pointsec can be used to encrypt sensitive or protected data.

Trusted Platform Module

A Trusted Platform Module (TPM) is a dedicated microprocessor that is mounted on a device's main circuit board and serves as a cryptoprocessor. The TPM offers many advantages, such as offloading cryptographic processing from the main CPU to a dedicated microprocessor. TPMs offer many services, such as providing a random number generator and generating and storing cryptographic keys. Beginning in 2006, most laptop computers targeted at the business market have been produced with a TPM chip. Use of a Trusted Platform Module has spread to other devices such as cellular phones, personal digital assistants, and even dedicated gaming devices.

The Trusted Computing Group (TCG), an international consortium of approximately 120 companies, wrote the original TPM specification and maintains revisions and updates to the standard. The current revision is TPM 2.0, which was released in the fall of 2014.

Trusted Platform modules provide numerous services, some of which are listed here:

  1. Trusted Boot Protection Provides system integrity during system boot by storing specific system metrics, such as operating system hash values, to detect changes to a system from previous configurations, such as the installation of rootkit malware. This ensures platform integrity.
  2. Encryption Key Storage Full disk encryption software applications use the TPM technology to store and protect keys that may be used to encrypt the host hard disks.
  3. Password Protection Users authenticate by presenting a password to access encrypted data or systems. The password is quite often used to generate or access keys used in the encryption process. The TPM offers an authentication mechanism that is implemented on the hardware rather than the system software. Software key encryption is prone to dictionary attacks. A hardware implementation offers dictionary attack prevention.
  4. Device Identification All TPMs feature an identification key that is burned in during the manufacturing process. This uniquely identifies the device and can serve as both a device integrity and an authentication mechanism.

All server, desktop, laptop, thin client, tablet, smartphone, personal digital assistant, and mobile phone devices procured for use to support United States Department of Defense (DoD) applications and functions must include a Trusted Platform Module version 1.2 or higher (when available). The intended use for the device is for endpoint identification, authentication, encryption, monitoring and measurement, and device integrity.

Endpoint Data Sanitization

Local, non-network storage devices, such as USB flash drives, SD cards, hard disk drives, CDs, and other personal storage devices, are commonly used to store data within an organization. Users of these products are sometimes unaware of the need to ensure the privacy of the information stored on these devices.

Endpoint data sanitization is the process of taking a deliberate action to permanently remove or destroy the data stored on a storage device. A sanitized device has no usable residual data, and even advanced forensic tools should never be able to recover the erased data. Sanitization makes use of two methods of data destruction:

  1. Software Utilities Utilities are available that overwrite the storage medium numerous times in an effort to totally erase the data, leaving no remnant or residual data on the device.
  2. Hardware Destruction You can physically destroy a device so the data cannot be recovered.
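A software-utility overwrite pass can be sketched in a few lines. This is a simplified illustration under stated assumptions, not a compliant sanitization tool: SSDs and other flash media remap blocks internally, so logical overwriting may not reach all physical cells (NIST SP 800-88 recommends other techniques for such media).

```python
import os
import secrets
import tempfile

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes several times,
    flushing to disk each pass, then delete it. Illustrative only:
    wear-leveled flash media may retain data in remapped cells."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # force the pass onto the physical medium
    os.remove(path)

# Demo on a throwaway temp file containing mock sensitive data.
fd, demo = tempfile.mkstemp()
os.write(fd, b"PII: 123-45-6789")
os.close(fd)
overwrite_and_delete(demo)
print(os.path.exists(demo))  # False: file removed after overwrite passes
```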

Users who routinely work with highly sensitive data or regulated data, such as HIPAA-protected data, credit card information, financial information, or other personally identifiable information, should be provided with a method to dispose of or at least permanently erase storage devices. Many organizations provide users with a secure drop box into which they place printed materials, drives, CDs, and other media that contain sensitive information. The boxes are collected by a certified document destruction company and shredded to meet various specifications. Over the past several years, the National Institute of Standards and Technology's (NIST) Special Publication 800-88 revision 1, “Guidelines for Media Sanitization,” has become the reference for data erasure compliance.

Application Security

Many software applications are available for distribution on the Internet, both freeware and commercially available software. Any software obtained through downloading or via DVDs or CDs, from any source, must be considered copyright protected. Even freeware and shareware often require a contractual agreement, which is mandatory with commercial software and referred to as an End-User License Agreement (EULA). Unauthorized use of copyrighted software may expose an organization and user to substantial monetary fines. An acceptable use policy (AUP) banner should advise end users of the consequences of indiscriminately downloading or installing software on a workstation. Endpoint devices such as end-user workstations should be regularly scanned for unauthorized installed software, which may expose the endpoint device and introduce vulnerabilities that attackers can take advantage of.

Users should be made aware during security awareness classes that the Internet can be extremely dangerous when it comes to downloading software products; many products seem harmless but contain viruses that can wreak havoc on computers. Some of the most offensive sites are those offering hardware drivers for downloading. These sites appear to be legitimate representatives of the manufacturer and seem to offer authorized drivers and updates for manufacturers' hardware products. The design of these sites makes it easy for the unaware user to become confused and click links that immediately infect a workstation.

Reputable manufacturers offer a method of verifying the authenticity and integrity of software applications, as well as approved patches and upgrades, offered for download.

Code signing is a method through which the author is authenticated and confirmed as the original provider of the software. Code signing is accomplished by the software author using a cryptographic hash algorithm to process the software code to obtain a hash value or message digest. The hash value is then published on the author's website along with the name of the cryptographic hash algorithm that was used to produce a hash value.

The author then encrypts the hash value with their asymmetric private encryption key and includes the encrypted hash value with the software download. The encryption of the hash value is referred to as a digital signature. Through the use of public key infrastructure (PKI) and the application of digital certificates, it may be proven that the only person capable of signing the code was the owner of the private encryption key.

Once the software is downloaded, the user decrypts the hash value using the author's public key, which is found in the author's digital certificate, and then hashes the software code using the same hashing algorithm to obtain a second hash value. The second hash value is compared to the first hash value, and if they match, the user can be assured of the integrity and authenticity of the software code. Although this sounds like laborious work, it can be automated through most browsers.
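The integrity half of this process, hashing the downloaded file and comparing the result against the publisher's advertised digest, can be sketched with the standard library. Verifying the digital signature itself additionally requires a cryptography library and the author's certificate, which this sketch omits:

```python
import hashlib
import os
import tempfile

def verify_download(path: str, published_hex_digest: str,
                    algorithm: str = "sha256") -> bool:
    """Hash a downloaded file in chunks and compare it to the publisher's
    advertised digest. A match confirms integrity; authenticity further
    requires verifying the publisher's digital signature via PKI."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == published_hex_digest.lower()

# Demo: simulate a downloaded package and its published SHA-256 digest.
fd, pkg = tempfile.mkstemp()
os.write(fd, b"installer bytes")
os.close(fd)
published = hashlib.sha256(b"installer bytes").hexdigest()
print(verify_download(pkg, published))  # True: digests match
os.remove(pkg)
```

If even one byte of the file has been altered in transit, the computed digest changes completely and the comparison fails.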

Centralized Application Management

Centralized application management (CAM) is becoming popular with the advent of private and commercial cloud systems. Virtual desktop is a method of remotely implementing and administering operating systems as well as applications on a server rather than on the local endpoint device. When this type of client/server model is used, the data is not stored locally, and no applications execute locally. All end-user keystrokes and mouse clicks are transmitted to the server, and the generated visual display is sent to the endpoint device. By incorporating client software that manages both the virtual desktop session and virtual private network communications onto a USB drive, end users are no longer tied to a particular endpoint location. They simply insert the USB drive into a workstation to log on and create their virtual desktop. The user is no longer constrained by the platform or device they wish to use. Using a virtual desktop, data and applications may be made available across any device.

Cloud-based Software as a Service (SaaS) implementations will enhance the adoption rate of subscription-based software models. Subscription-based software models allow for the continuous update and maintenance of application software without the requirement to distribute patches and upgrades to the end-user endpoints. This provides an always up-to-date software application, which may be accessed at any time and from anywhere by the end user.

There are hundreds of cloud-based applications. Microsoft Office 365 is an implementation of a popular business productivity online suite using a cloud-based subscription model. Coupled with the unlimited Microsoft OneDrive cloud storage, users are no longer required to have a software application or data stored locally on an endpoint device.

Stolen Devices

Lost or stolen devices possibly containing corporate data create a huge risk for an organization. Not only can the data on the device be compromised, but user passwords as well as confidential information might all be stolen and possibly sold. As end users use their personal mobile devices such as tablets, smartphones, and other devices for personal and work-based data storage, the harm to the organization should the device be stolen or lost is greatly increased.

The IT information security policy of an organization must address the use of both organization-issued devices and personal devices used for business. Here are some recommendations:

  1. User Training User awareness training must accentuate the risks and the policies and procedures involved in placing organizational information on a privately owned device.
  2. Wiping Process Stolen or lost personally owned devices should be subject to full or partial wiping to destroy the data contained on the device.
  3. Device Recovery All personal devices used for business purposes should have tracking and locating applications installed.
  4. Device Registration All personal devices used for business purposes should be registered with the IT department.
  5. Device Seizure Corporate policies may dictate, along with a signed acknowledgment of the individual, that any privately owned device used for business purposes must be turned over to the organization upon request without the requirement for a court order or search warrant.

Along with the recommendations just cited, additional recommendations for organization-issued devices should be taken into consideration:

  1. Personal Information Personally identifiable information should not be maintained on an organization-issued device.
  2. Dedicated Travel Device Traveling executives or company personnel may be issued travel devices created with a clean device image. Personnel may load only the data required for travel on the device. The device is returned to the organization upon conclusion of travel and is wiped and reimaged. This is especially true with any travel to foreign countries.

All portable devices should invoke a locking screen requiring a PIN or fingerprint after a period of inactivity.

It is highly recommended that in the event of a device being stolen or lost, the authorities within the jurisdiction of the suspected theft be notified as soon as possible. Although device locator applications are available for most devices and can pinpoint the current location of the device, most police organizations strongly advise not pursuing the stolen item personally but instead involving the local officials. This reduces the likelihood of personal harm during a potential confrontation.

Thieves specifically target executive phones and portable devices for sensitive data and restricted information. Sensitive data, especially new product release details, merger and acquisition information, and other organizational financial or proprietary information, may have a sizable street value. Traveling executives should be made aware, through security awareness briefings, that they may be specifically targeted for device theft and should take special precautionary and protective measures.

Workstation Hardening

Endpoints can be hardened by removing all nonrequired services and applications. All network services that are not needed or intended for use should be turned off. While all vulnerable applications should be patched or upgraded, other workstation hardening techniques include eliminating applications or processes that cause unnecessary security risks or are not used. File permissions, shares, passwords, and access rights should also be reviewed.

Host Intrusion Detection and Prevention System

A host-based intrusion detection/prevention system (HIDPS) is software that runs specifically on a host endpoint system. Operating similarly to a network-based intrusion prevention system (NIPS), the software monitors activities internal to the endpoint system. Examining log files, monitoring program changes, checking data file checksums, and monitoring network connections are among the tasks managed by a HIDPS. Data integrity, logon activities, usage of data files, and any changes to system files can be monitored on a continuous basis.

Host-based intrusion detection prevention systems operate similarly to anti-malware software in that they use signature-based detection (rule based) and statistical anomaly–based detection as well as stateful protocol analysis detection to monitor activities within an endpoint. HIDPS software, similar to the network-based NIDPS, may take predetermined actions based upon the discovery of suspicious activity. This may involve terminating services, closing ports, and initiating log files.
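The data-file checksum monitoring that a HIDPS performs can be sketched as a baseline-and-compare loop; a real product layers signature rules, anomaly models, and automated responses on top of this idea. The function names here are illustrative:

```python
import hashlib
import os
import tempfile

def take_baseline(paths):
    """Record a SHA-256 checksum for each monitored file."""
    baseline = {}
    for path in paths:
        with open(path, "rb") as f:
            baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def detect_changes(baseline):
    """Return files whose contents no longer match the baseline
    (or that have been deleted) -- candidates for an alert."""
    changed = []
    for path, digest in baseline.items():
        if not os.path.exists(path):
            changed.append(path)
            continue
        with open(path, "rb") as f:
            if hashlib.sha256(f.read()).hexdigest() != digest:
                changed.append(path)
    return changed

# Demo: baseline a mock system file, then tamper with it.
fd, cfg = tempfile.mkstemp()
os.write(fd, b"config v1")
os.close(fd)
baseline = take_baseline([cfg])
with open(cfg, "wb") as f:
    f.write(b"tampered")
print(detect_changes(baseline) == [cfg])  # True: modification detected
os.remove(cfg)
```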

Securing Mobile Devices and Mobile Device Management

Mobile devices should be protected at all times. This requires comprehensive physical security measures as well as protection of all electronic data residing on the device. Several major data breaches have been caused by simply leaving a laptop exposed in a parked car while the owner ran into a store.

Mobile Device Management

Mobile Device Management (MDM) is a corporate initiative that manages the growing use of Bring Your Own Device (BYOD) policies in the workplace. It addresses both the requirement of the organization for network security and the protection of corporate information as well as recognizing the desire for the organization's members to use their personal devices in the workplace. Although seen by some as highly restrictive and intrusive, MDM policies prevent the loss of control and impact to organizational assets through data leaks and information exfiltration as well as establishing a baseline for device patching, updates, and OS hygiene.

IT policies involving the security of mobile devices should include steps that may be taken by both the IT department and the end user and should include user awareness training concerning the risks involved with mobile devices. A mobile device security strategy for an organization may involve the following:

  1. Operating System Updates Maintaining operating system patches and updates will safeguard against vulnerabilities and security risks.
  2. Jailbroken Phones Any device that has been jailbroken should be restricted from accessing a corporate network. Such devices expose the organization to security risks and vulnerabilities.
  3. Use of Strong Passwords The device should have a strong password, and if any applications offer security features or passwords, they should be invoked. Applications should have a different password than the device password.
  4. Encryption Technology Make use of an encryption technology available for the device.
  5. Wireless Encryption Devices should connect only to encrypted access points using WPA2.
  6. Bluetooth Bluetooth should always be disabled when not in use. Disable Bluetooth “discoverable” when not required.
  7. Email Encryption Use TLS or PGP to encrypt all email.
  8. Web Browser Security Enable web browser security.

Corporate Owned Personally Enabled (COPE) is a program whereby the organization owns and controls the device while the user may use the device for personal purposes as well as business activities. The IT department still maintains control over updates, upgrades, and software installation. This is ideal for a network access control program that monitors the health and hygiene of devices connecting to the network.

COPE is almost the complete opposite of BYOD: the user is not bringing their own device to the workplace but instead using a corporate-owned device for their own purposes. Under this program, the organization purchases or subsidizes the purchase of a catalog selection of devices and makes them available to organization users. The organization usually participates in the monthly connection cost involving phone, data, and multimedia packages. Through this program, economies of scale can be realized by bulk purchasing devices for distribution while also negotiating connection packages with service providers.

COPE programs are not new. Many organizations have been issuing devices to employees and members for a number of years. COPE programs provide the organization with more oversight and policy enforcement capability, thus reducing some of the risk that comes with BYOD. It is now a viable economic means to take control of the BYOD environment.

Understand and Apply Cloud Security

If one had to sit back and wonder where the world of computers will be in five years, the answer will emphatically be two words: the cloud. The truth of the matter is that the industry is there now. Cloud computing, as it was originally called, has since been shortened to just “the cloud.” Without a doubt, there remain a fair number of old-school computer folks around who will jokingly point at a USB drive and proclaim to anyone listening, “That's my cloud right there.” Unfortunately, they certainly do not grasp the significance and the magnitude of what currently exists and what is yet to come.

The cloud has appeared at a point in time that is nearly a perfect storm in the world of network computing. There are two major forces at work that together are forcing the adoption of cloud technology on the world stage faster than almost any other computer technology has been adopted in the past. On one hand, corporate and organization networks are growing at unprecedented rates. Corporate data centers are struggling with limited budgets, physical space, time, and manpower, among other factors, just to keep up with end-user demand from corporate departments. Applications and corporate databases are requiring terabytes and petabytes of storage, while the numbers of users requesting access are seemingly growing exponentially, all of the time requiring larger and larger amounts of network resources. On the other hand, financial forces are at work. IT department budgets have continuously been strapped. The saying in those departments has been “do more with less,” while the C-level folks continuously struggle to provide greater returns to investors and shareholders. And, just when we need it, along comes a computing technology that addresses both needs.

The cloud offers a future that promises to address both the requirements of the IT department and the cash flow desires of corporate management. The IT department is thrilled because it no longer must expand and maintain massive amounts of corporate-owned networking resources. The cloud offers an ability to outsource all of the headaches involved with owning networking resources. The equipment no longer needs to be purchased, installed, configured, maintained, upgraded, and operated 24/7 by a small army of personnel, all employed by the organization.

The cloud offers all of the resources required, including resources that can be virtually configured as well as expanded and collapsed on demand. No hardware ownership. The C-level folks are ecstatic about the arrangement.

The financial benefits of the cloud will make a substantial impact on the balance sheet as well as the bottom line of the organization. Simply put, purchasing servers, routers, switches, cooling equipment, and other hardware items is classified as a capital expenditure (or simply CapEx) for a business. For accounting purposes, a capital expenditure must be spread out over the useful life of the asset; this is referred to as depreciation.

The outsourced cloud model is identical to that of a utility company for services such as telephone, electricity, and natural gas. The corporation pays on a regular basis, usually monthly, for the amount of service it uses. For financial purposes, this is referred to as an operational expense, or simply OpEx. And just as with any other utility, the organization pays for only the services that it uses at any given time. To add to the benefit, other expenses related to operating a large organization-owned data center will also be reduced.
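The accounting difference between the two models can be illustrated with straight-line depreciation; the dollar figures below are assumptions chosen only for illustration:

```python
# Illustrative figures (assumed): a $60,000 server depreciated straight-line
# over a 5-year useful life, versus an equivalent cloud subscription.
server_cost = 60_000          # CapEx purchase price (assumed)
useful_life_years = 5

# Straight-line depreciation spreads the cost evenly over the asset's life.
annual_depreciation = server_cost / useful_life_years
print(annual_depreciation)    # 12000.0 expensed per year, asset on the books

# OpEx alternative: pay a recurring fee only for services actually used.
monthly_subscription = 900    # assumed cloud fee
annual_opex = monthly_subscription * 12
print(annual_opex)            # 10800 expensed per year, no asset to depreciate
```

The point is not the specific totals but the structure: CapEx creates a fixed asset that must be depreciated over years, while OpEx is simply expensed as it is incurred and scales up or down with usage.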

So as you can see, the perfect storm is meeting the requirements from the IT department for additional flexible services that it can expand and contract on demand as well as satisfying the requirements from the finance department to switch from financing and depreciating an ever-increasing amount of fixed assets while converting the cost into a simple monthly payment out of operating expenses. Corporations and organizations large and small are rapidly adopting the cloud.

Cloud Concepts and Cloud Security

Many technology users today are quite familiar with storing data on individual public cloud systems. Google, Amazon, Microsoft, and Dropbox are just a few of the hundreds of service providers vying for the attention of the technology-savvy user community.

While millions of users have registered for free or subscription-based service accounts, hundreds of thousands of organizations, small and large, are already making use of cloud provider offerings.

Cloud computing may be readily accessed by any type of communication-enabled device or system. Traditional end-user devices such as smartphones, tablets, personal digital assistants (PDAs), pagers, laptops, desktops, and mainframes are at the top of the list of cloud access client devices. Although the first things that come to mind are those devices that we touch or type on, currently tens of thousands of devices, and in the near future millions, will directly access the cloud automatically.

For instance, some types of intrusion prevention systems and IT network appliances right now receive automatic updates from the cloud while transmitting information such as network attack signatures and profiles, including potential zero-day attacks. This attack data is automatically analyzed and immediately shared with other cloud-based devices within the same product group to inform and protect other networks with similar devices installed.

Cloud computing may be viewed as a huge data collection repository that monitors and collects information from sensors, industrial control devices, automobiles, and other mobile equipment such as trucks, tractors, forklifts, boats, ships, and construction equipment, as well as virtually any conceivable type of device with communication capability. Although the cloud may be a data collection repository, it may also be a collection of applications that download information, instructions, activity updates, and other operational reference data to connected devices.

Cloud Communication

By definition, the cloud is a collection of computer network hardware, such as servers, switches, routers, and other traditional networking equipment. These items operate in buildings that may be located across town, within the state, several states away, or even in a totally different country. In the case of a private cloud, one that is privately owned and operated just for the organization, the cloud could actually be down the hall or within a data center in the same building.

Connecting to the cloud may be accomplished by a wide range of methods, although most users communicate with the cloud through a personally owned device such as a laptop, tablet, or smartphone.

Although traditional methods of connecting to the Internet currently exist, other direct cloud connection methodologies are in place and used by a variety of products and devices. These include satellite links directly to a cloud facility, hardwired packet-switching networks directly connected to a cloud facility, and other methods that may include radio transceivers.

Hybrid communication technology can aggregate communications within a geographic location. For instance, consider that your house is a cloud-connected entity. Imagine that you have network-attached sensing devices installed around your home. Each item contains an embedded controller referred to as a dedicated computing device (DCD). Each device includes a processor, volatile and non-volatile memory, a variety of sensors and controllers, and a wireless transceiver, either Wi-Fi or Bluetooth. Each device may be specifically programmed to carry out a dedicated function, such as controlling, reporting on, or monitoring something, and to take a specific action upon an event occurring. Items such as those listed here may be found around your home in the near future:

  • Intelligent refrigerator
  • Intelligent washer and dryer
  • Intelligent kitchen appliances
  • Household plant sensors
  • Outdoor lawn and plant sensors
  • Fish aquarium sensor
  • Hot water sensor
  • Hot tub and spa sensor
  • Vehicle sensor
  • Electrical power usage sensors
  • Household water usage sensors
  • Household air-conditioning system sensor
  • Mailbox sensor
  • Automated lighting control system
  • Entertainment system control and sensors
  • Automated air vent sensors and controllers
  • Utilities-monitored sensors
  • Intrusion detection sensors
  • Fire detection sensors
  • Flood detection sensors

And the list continues.

Each of the items in the preceding list represents both controllers and sensors dedicated to a specific purpose, with some of these devices being smaller than a key cap on a standard keyboard or possibly no larger than an aspirin. For instance, the vehicle sensor might report on the level of gas in the tank, the air in the tires, or even a stored calendar of who is scheduled to use the vehicle next and their planned destination. The aquarium sensor may be a floating device that continually tests and reports on the temperature and quality of the water. The outdoor lawn and plant sensor may send continuous readings of soil moisture, nitrogen content, available sunlight, and other health and wellness parameters. Air-conditioning/heating unit sensors forward data on the outside compressor as well as the in-attic condenser and heating unit.

It is interesting to consider the future of “wearables.” Such devices will include sensors and devices manufactured into your shoes and clothing, strapped onto your body, and perhaps embedded on your body that will provide various capabilities such as parameter monitoring and functional control. Yes, pacemakers today are embedded devices that may report on certain conditions and have their parameters adjusted wirelessly by medical professionals.

Imagine that in the near future your third-grade daughter comes home from school early and reports that she feels sick. You pop open a bottle of sensors and she swallows one with a little water. Virtually immediately, you begin receiving diagnostic information on your cell phone measuring dozens of parameters within her body. The cell phone application automatically downloads cloud-based data coinciding with the parameter measurements of the ingested pill and recommends health and remedy methods for you to take.

All of these devices must communicate by some method. Not all sensors and controllers have the power to communicate directly with the Internet. Imagine a home-based proxy device that communicates via Bluetooth over short distances within the house and then converts the signals into Wi-Fi or cellular radio format for forwarding to the Internet. This may be a dedicated black-box device that simply plugs into a wall outlet.

As you can see, all of these devices and capabilities will be available in a very short period of time. The challenge will be securing these devices with cloud security and encryption methods to mitigate the problems of intrusion and attack. Although you might be able to live with an attack on your swimming pool sensor, you might not if the attack is on your pacemaker. This illustrates the seriousness and requirements concerning cloud security.

Cloud Characteristics

The computing industry continuously seeks a common source of reference for concepts and terminology to better understand and communicate technology concepts. The National Institute of Standards and Technology (NIST) through NIST Special Publication 800-145, “The NIST Definition of Cloud Computing,” lays out some fundamental cloud-defining concepts.

The NIST special publication defines some broad categories that are characteristics of clouds:

  1. Broad Network Access Cloud services, whether private cloud offerings within an organization's IT department, non-fee-based free public clouds, or subscription-based services offered by large cloud providers, all offer ease of access over normal network connections. Cloud-based services may be easily accessed through the use of a standard Internet browser or client software downloaded to the user's device. For compatibility across a broad range of user-owned platforms, cloud access automatically transforms and reformats to the device requirements. This provides total access transparency to the user regardless of the device being used, be it a smartphone, smart watch, laptop, tablet, personal digital assistant, or intelligent entertainment center.
  2. On-Demand Self-Service Users can subscribe to services by simply selecting from cloud provider menus. Not only can they select cloud products, but during the process they can possibly engage a sharing and distribution system that, through the principle of discretionary access control, provides access rights to information and content. Various IT security publication articles are beginning to address the Bring Your Own Cloud (BYOC) concepts occurring in commercial businesses today and the possible security implications.
  3. Rapid Elasticity Elasticity allows the subscriber to purchase additional capability based on user requirements. For instance, a free cloud service might provide the user with 10 MB of storage space, a figure that will surely be eclipsed fairly shortly. The user may then select from a range of upgrades, usually for a small annual fee. In the event that the user commits to much more storage than is required, they can just as easily select a smaller volume of storage at a reduced annual fee.
  4. Pooling of Resources Cloud systems make use of virtualization to maximize hardware utilization. This means that rather than one server with one client that uses the server 60 percent of the time, the same server might run several virtual machines that together use 95 percent of the hardware capability and can be adjusted for workloads very rapidly. For example, cloud hardware can immediately be reallocated to the demands of the client base. If a cloud-based client in Germany utilizes resources during its eight-hour workday and then shuts down in the evening, the same cloud-based hardware can be reallocated to another client based in Atlanta, Georgia, as that workday begins. The expansion and contraction of cloud-based assets can be adjusted dynamically during the day or for a holiday selling season when traffic volume is very high.
  5. Measured Service It's rare in a standard corporate environment that a department such as finance or legal is directly billed by the corporate IT department for the amount of assets, time, or space it utilizes. If this were the norm, there might not be such contention over IT budget requests. Cloud providers, on the other hand, have developed charging methods that monetize the use of cloud services and assets. Not unlike the charging methods utilized by utility companies, the cloud client pays for exactly what they use.
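The pay-for-exactly-what-you-use model can be sketched in a few lines; the meter names and per-unit rates below are purely hypothetical, not any real provider's pricing:

```python
# Hypothetical metered-service billing sketch: the rates and usage
# figures are illustrative, not any real cloud provider's pricing.
RATES = {
    "storage_gb_month": 0.023,   # $ per GB-month stored
    "compute_hours":    0.085,   # $ per VM-hour
    "egress_gb":        0.090,   # $ per GB transferred out
}

def monthly_bill(usage: dict) -> float:
    """Charge the client for exactly what the meters recorded."""
    return round(sum(RATES[meter] * qty for meter, qty in usage.items()), 2)

bill = monthly_bill({"storage_gb_month": 500, "compute_hours": 720, "egress_gb": 40})
print(bill)  # 11.50 + 61.20 + 3.60 = 76.3
```

The point of the sketch is the shape of the model, not the numbers: every service is metered, and the bill is a straight function of usage, just as with a utility meter.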

Security of the Five Cloud Characteristics

As a security practitioner, it will become abundantly clear that every benefit carries with it a reciprocal potential for a security concern. The security practitioner should be aware of the following broad cloud-based security issues:

Broad Network Access

Clouds may be accessed from a broad number of devices. This greatly expands the requirements for cloud-based access control as well as remote authentication techniques. Since clouds may be accessed from devices directly, without those devices first being connected to the organization's LAN, proper access and authentication controls must be provided at the cloud edge rather than in the business network. These controls must also be able to provide correct authentication across a number of personally owned devices and a variety of platforms.

On-Demand Self-Service

Cloud providers are in the business of selling services. They will gladly grant additional capabilities upon request. The organization must put into place corporate policies and organizational structures such as a change control board, a cloud services request control board, or a services request procedure that controls the allocation of cloud services to requesting corporate individuals, departments, and entities. This mitigates the risk of individual departments requesting additional services directly from the cloud service provider without prior authorization.

Rapid Elasticity

Any IT department that has managed a corporate “shared drive” knows how these can fill up very quickly. Also, it is usually very difficult to determine the owner of files and information stored on the shared drive. It's not unusual to find, for example, a series of PowerPoint presentations from 2002 with no information about whether they can be deleted or not. The same is absolutely true of cloud-based storage. Once storage space begins to expand, it is very difficult to contract it. Unlike the simple shared drive on an in-house network, cloud-based storage containing the same information will be incrementally more expensive.

Pooling of Resources

Cloud providers have the ability to pool resources as previously discussed. Some very serious security implications accompany this concept. First, cloud resources are shared among a huge number of tenants, which means that other users are on the same server equipment at the same time, and the possibility exists that data may be written into another tenant's area. Second, in the event of another tenant conducting illegal activities, the entire server might be seized along with your data. Third, unscrupulous cloud provider personnel may access and exfiltrate your data. Fourth, investigations are complicated by the jurisdictional location of data on cloud service provider equipment. Fifth, information security is totally within the control of the service provider; if the provider is penetrated, corporate information, plus the information of many other clients, may be compromised. These are just a few of the security concerns cloud services raise for the security practitioner.

Measured Service

It's not unusual in some countries to run a wire over to a neighbor's electrical meter. In fact, even in our country, theft of cable television services was quite a fad years ago. Theft of any measured service is possible if the attacker is determined. It is incumbent upon the cloud client to thoroughly check billing statements against authorized and requested services to mitigate the possibility of service theft.
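That reconciliation of billing statements against authorized services can be sketched as a simple set comparison; the service names here are hypothetical:

```python
# Sketch: reconcile a provider's billing statement against the services
# the organization actually authorized. Service names are hypothetical.
authorized = {"object-storage", "vm-compute", "managed-database"}
billed     = {"object-storage", "vm-compute", "managed-database", "gpu-cluster"}

unauthorized = billed - authorized   # billed but never requested
missing      = authorized - billed   # requested but never billed

print(sorted(unauthorized))  # ['gpu-cluster'] -- flag for investigation
print(sorted(missing))       # []
```

Anything in the unauthorized set warrants investigation: it may be service theft, an unapproved departmental request, or a provider billing error.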

Cloud Deployment Model Security

There are a variety of types of cloud services. Generally, they are classified by who owns the equipment upon which the cloud services run and which potential clients might use the services of the cloud provider. Again, NIST Special Publication 800-145 offers a list of four cloud deployment models. These cloud models are described in the following sections.

Private Cloud

A private cloud is typically a cloud constructed as part of the organization's existing network infrastructure. This allows an organization to provide cloud services, complete with expansion and elasticity components, for internal clients. This type of cloud is best utilized in the following cases:

  • Data cannot be stored off-premises due to regulatory or contractual requirements.
  • Cloud provider costs exceed the cost to build and own.
  • Considerations exist regarding control over data location or over jurisdictional, legal, or other proprietary concerns.

A private cloud may take the form of an intranet or an extranet with the added benefits of virtual allocation of assets and elasticity of storage.

Private clouds, having been created internally with organization-owned networking equipment, carry the same security risks associated with any internal network. The organization must still provide the proper network safeguards, including intrusion prevention controls and network access controls, as well as maintain all equipment with proper patching and updates.

Community Cloud

Community clouds can be established to provide cloud services to a group of users who require access to the same information for a similar purpose. For instance, rather than set up an FTP site to transfer information between departments or external users, it is quite possible to set up a community cloud where everybody can access and share the same information. Using a specialized application, document versioning, access control, data file lockout, and user rights may be enforced. With a community cloud, it's easier to train inexperienced users in browser-based access techniques than to assign FTP accounts and educate users on file transfer protocol techniques.

A community cloud may be as simple as a Dropbox location from which attendees to a family reunion may easily access all of the photographs that were taken. It's actually simpler to implement and explain the community cloud to inexperienced users than to explain the use of Facebook.

Security problems exist in a community cloud. Management of access control becomes a paramount concern. Authentication techniques as well as encryption key distribution become major factors between unrelated third-party entities.

Public Cloud

The public cloud is probably the most common cloud platform. Public cloud services are offered by such providers as Microsoft, Google, Apple, and Amazon, among many others. The public cloud service is the easiest cloud offering for any individual to utilize. The most well-known have brand names such as SkyDrive, OneDrive, Amazon Web Services, iCloud, Google Drive, Carbonite, and Dropbox, and there are many others. The beginning monthly charge is usually less than $20, and they provide from 5GB to 500GB of storage. Most providers offer low-cost or no-cost entry-level services and escalate from there. Public clouds are easy to set up and easy to maintain.

Security problems exist in a public cloud environment. In the event that a public cloud is compromised, passwords, access control, and proprietary information may be exposed. The clients are purely at the mercy of the public cloud providers to provide adequate security.

Hybrid Cloud

Hybrid cloud structures consist of combining two forms of cloud deployments. Here are two examples:

  1. Private Cloud/Public Cloud This environment retains restricted or regulated information in-house in a flexible private cloud scenario. It provides the benefits of a public cloud to departments that require elasticity or cross-platform access.
  2. Private Cloud/Community Cloud This environment retains restricted or regulated information in-house in a flexible private cloud environment, while it provides extranet capability for suppliers or customers in a community cloud environment. This provides the flexibility of a community cloud with the continued elasticity and cross-platform access behind web browser functionality.

Hybrid clouds offer a great degree of flexibility to an organization that requires cloud-based services but wishes to capitalize on the cost savings afforded by cloud service providers. However, hybrid cloud security compounds the security threats of the private cloud with those of the other cloud model included.

Cloud Service Model Security

Although there appear to be a number of “ _____ as a Service” offerings on the market, NIST Special Publication 800-145 offers a list of three cloud service models, which are described in the following sections.

Software as a Service

The Software as a Service (SaaS) model allows the user or client access only to an application that is hosted in the cloud. Such applications run on a cloud provider's equipment, and the SaaS provider manages all hardware infrastructure and security. Access usually requires identification and authentication and is based upon browser interfaces so that users can easily access and customize the services. Each organization serviced by the SaaS provider is referred to as a tenant. The user or customer is the individual or company licensed to access the application.

There are several delivery models currently in use within SaaS:

  1. Hosted Application In a hosted application delivery model, the application vendor does not own the cloud equipment and instead hosts the application on a cloud service provider's infrastructure. The application vendor maintains the application with patches and updates and generally makes access to the application available on a subscription basis or through corporate entitlement contracts. Hosted applications in the cloud function identically to applications that may be loaded on a user's workstation and are transparent to the end user. Users typically utilize an Internet browser to access the hosted application from any location.
  2. On-Demand Software This type of software delivery model features a software application that is owned or created by an application vendor and is hosted on the application vendor's cloud infrastructure. The application supplier manages and maintains the application as well as the cloud infrastructure. Access to on-demand applications is provided by a subscription on a pay-as-you-go basis or for a flat monthly fee.
  3. Cloud Provider Applications This usually consists of a suite of applications hosted on a commercial cloud provider's infrastructure and owned and maintained by the cloud provider. The software suite usually includes application development tools, hosting tools, graphics tools, and other software applications that are of interest to client IT departments to facilitate interaction with the cloud provider or to create applications, APIs, or other items that may be hosted by the cloud provider.

Platform as a Service

The Platform as a Service (PaaS) service delivery model allows a customer to rent virtualized servers and associated services used to run existing applications or to design, develop, test, deploy, and host applications. PaaS is delivered as an integrated computing platform, which may be used to develop software applications. Software development firms utilize PaaS providers to provide a number of application services such as source code control, software versioning tools, and application development process management tools. PaaS is used to build, test, and run applications on a cloud service provider's equipment rather than locally on user-owned servers.

Infrastructure as a Service

The Infrastructure as a Service (IaaS) service delivery model allows a customer to rent hardware, storage devices, servers, network components, and data center space on a pay-as-you-go basis. The cloud service provider maintains the facilities, infrastructure updates and maintenance, and network security controls. Although IaaS is the primary cloud service model, two types of specialized infrastructure service models exist:

  1. Dedicated Hosting/Dedicated Server Dedicated hosting is a type of hosting option in which the client leases an entire server, which is then dedicated solely to the client. The host provides the server equipment and usually provides administration services. This is the opposite of the multitenant concept normally found in cloud service models. It may also be referred to as a dedicated server.
  2. Managed Cloud Hosting Managed cloud hosting is an IT networking process in which an organization extends its local network into a cloud-based environment. The organization may host critical applications extended over long periods or provide for rapid expansion to cloud-based equipment as required. The benefits include affordability, the ability to run both physical servers and virtual servers and thus provide consistent availability, and network security.

Cloud Management Security

Cloud management security encompasses the policies, standards, and procedures involved with the transfer, storage, and retrieval of information to and from a cloud environment. As with any network, there are four primary activities:

  • Executing applications
  • Processing data, known as the compute function
  • Transmitting and moving data, known as data in transit
  • Storing and retrieving data, known as data at rest

Specific threats and vulnerabilities may be identified and addressed with the proper use of security controls.

Executing Cloud-Based Applications

Executing cloud-based applications involves user interaction with an application through an application programming interface (API) or virtual desktop environment. The cloud-based application is stored in cloud storage and executes on remote cloud service provider equipment. Cloud-based application execution provides several concerns for the security professional:

  1. Application Misconfiguration Applications may be configured incorrectly through the lack of appropriate patches, ineffective patches and upgrades, or incorrect setup and monitoring.
  2. Virtual Machine Attack Similar to any other item on the organization's network, virtual machines are subject to attack. A successful attacker or network intruder may perform a VM escape, in which the hacker breaks out of a virtual machine with the potential of attacking the host operating system. Should a hacker take control of the hypervisor environment, they can successfully attack all of the virtual machines running under that hypervisor.
  3. Access Control Violation Application access control is a concern since the subjects requesting access may not be first authenticated into a physical network. They may request access through APIs or web-based applications that may not have access to the same LDAP or Active Directory credential libraries.
  4. Software License Violation Improperly supervised software license appropriation may lead to excessive license assignments with the possibility of fines or sanctions against the organization.
  5. Application Design Vulnerabilities Applications executing on a remote environment are subject to the same design vulnerabilities as an application executing on physical servers owned by the organization. Injection attacks, scripting attacks, buffer overflows, and denial of service are a few of the standard attacks against application design vulnerabilities.

Processing Data in the Cloud

Data processing in the cloud is referred to as the compute function. Transactional data processing represents computations performed on a continuous stream of data sourced from such devices as cash registers and point-of-sale devices and input from business operations.

Executing a compute function in the cloud as part of a cloud-based application is identical to executing the same data processing function using a locally based application in an organization-owned data center. A cloud-based application vulnerability is a flaw or weakness in an application. Application design vulnerabilities are primary weaknesses that exist regardless of location. There are a wide variety of hacker tools and penetration techniques used for exploiting application vulnerabilities. Here are a handful that are much more common than others:

  • Cross-Site Scripting (XSS)
  • SQL injection
  • Username enumeration
  • Format string vulnerabilities
  • Buffer overflow from multitenant clients
  • Virtual machine escape
  • Poor data validation
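Two of the vulnerabilities listed above, SQL injection and poor data validation, can be illustrated with a minimal sketch; the table and input are hypothetical, and the parameterized-query mitigation applies whether the application runs locally or in the cloud:

```python
import sqlite3

# Sketch of mitigating SQL injection with a parameterized query.
# The table, columns, and attacker input are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "x' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query,
# so the injected tautology matches every row in the table.
vulnerable_rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + attacker_input + "'").fetchall()
print(len(vulnerable_rows))  # 1 -- the OR '1'='1' matched alice's row

# Safe: a bound parameter is treated strictly as data, never as SQL.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)).fetchall()
print(len(safe_rows))  # 0 -- no user is literally named "x' OR '1'='1"
```

The same discipline, treating all external input as data rather than executable content, also mitigates cross-site scripting and format string attacks.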

Security professionals should consider all aspects of application vulnerability mitigation when executing applications on cloud-based servers. A cloud-based application strategy should include the following:

  • Clear data security strategies such as performing risk analysis that includes the Three Ps of data processing within an application:

    1. Preprocessing The preparation process of data may involve sorting, screening, or normalization. This data is held in local storage and is queued for processing operations.
    2. Processing Data is transformed, converted, or altered by an application.
    3. Post-processing When data exits the application destined for a storage location, based on the usage and application, there may be several streams of data. This data may be further processed by other applications or sorted and stored using various techniques.
  • Identification and mitigation of potential attacks against the application.
  • Hardening of the application by controlling access.
  • Applying patches and upgrades, as applicable, and utilizing regression testing to identify problems after change. Cloud-based patches and upgrades should always be tested in offline but similar cloud-based virtual machines prior to being put into production.
  • Creating data protection policies and procedures for data being ported into and out of an application.
  • Utilizing application logging, database rollback, and database journal procedures to thoroughly document cloud-based application performance.
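The Three Ps described above can be sketched as three distinct, auditable stages; the record fields and transformations are hypothetical:

```python
# Minimal sketch of the Three Ps -- preprocessing, processing, and
# post-processing -- as separate stages. Fields and logic are hypothetical.
def preprocess(records):
    """Screen out malformed records and sort, queuing clean data."""
    return sorted((r for r in records if r.get("amount") is not None),
                  key=lambda r: r["id"])

def process(records):
    """Transform the data -- here, convert cents to dollars."""
    return [{**r, "amount": r["amount"] / 100} for r in records]

def postprocess(records):
    """Split the exiting data into streams for downstream consumers."""
    large = [r for r in records if r["amount"] >= 10]
    return {"archive": records, "review": large}

raw = [{"id": 2, "amount": 1500}, {"id": 1, "amount": 250}, {"id": 3, "amount": None}]
result = postprocess(process(preprocess(raw)))
print([r["id"] for r in result["archive"]])      # [1, 2]
print([r["amount"] for r in result["review"]])   # [15.0]
```

Keeping the stages separate makes it easier to apply risk analysis, and security controls, to each one individually.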

Transmitting and Moving Data in the Cloud

Businesses have ported data from one location to another for a long time. Bulk data transfer is as old as the early mainframe computers. The foundation of data transfer has been the AIC security triad of availability, integrity, and confidentiality. Of interest to organizations that move large amounts of data are nonrepudiation, end-to-end security, and auditability, as well as effective continuous monitoring that ensures accurate performance metrics.

Managed file transfer (MFT) is the transfer of data to, from, and between clouds securely and reliably regardless of data file size. It has recently found favor among cloud service providers as well as third-party data transmission providers. Managed file transfer is a technique of transferring data between sites or within an organization's networking environment utilizing best practices.

There are three specific areas of MFT:

  1. File Size Every organization requires data storage for files that must be retained to meet the business objectives of the enterprise. When it comes to big data, very large data sets, or data warehouses, large blocks of data can be moved at one time. On the other hand, continuously streaming data is constantly moving between locations.
  2. Communication Reliability Communication methodology is a decision point between the client and the cloud service provider. Service-level agreements are implemented between the user and provider to determine the type and capability of communication lines and equipment, transmission methodology, and the ability of the cloud provider to be ready to accept and process data when the client is ready to send it. Communication reliability will also provide nonrepudiation as well as performance metrics.
  3. Data Security Data security policies are put in place to support the requirements mandated by contractual or regulatory compliance issues or internal organizational policy. Most MFT data security is accomplished through encrypted tunnels. For instance, data-in-transit security between a user and a cloud provider may be provided by IPsec, which can supply encryption, authentication, and integrity to form a secure connection between the cloud provider and client.
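As a minimal stand-in for the integrity and authentication guarantees an IPsec tunnel provides in transit, the following sketch tags a transferred data block with an HMAC that the receiver verifies; the shared key and payload are hypothetical, and in practice the key would be negotiated out of band:

```python
import hashlib
import hmac

# Sketch: the sender tags an outgoing data block with an HMAC; the
# receiver recomputes it to verify integrity and authenticity on arrival.
SHARED_KEY = b"demo-shared-key"   # hypothetical; negotiated out of band

def tag(data: bytes) -> str:
    return hmac.new(SHARED_KEY, data, hashlib.sha256).hexdigest()

payload = b"quarterly-sales.csv contents..."
sent_tag = tag(payload)           # transmitted alongside the file

# Receiver side: recompute and compare in constant time.
received_ok = hmac.compare_digest(sent_tag, tag(payload))
tampered_ok = hmac.compare_digest(sent_tag, tag(payload + b"!"))

print(received_ok)  # True  -- data arrived intact
print(tampered_ok)  # False -- any modification changes the HMAC
```

The keyed hash provides integrity and origin authentication; confidentiality would still require encrypting the payload itself, which is what the tunnel supplies.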

Storing Data in the Cloud

Data stored in the cloud resides on the cloud service provider's equipment. Data might be stored in large logical pools where the actual physical storage may span devices in several data centers, which are often geographically separated. Cloud storage devices range from traditional hard disk drives (HDDs) to solid-state drives (SSDs), and mixing them in layers is referred to as tiered storage. Cloud providers often combine different types of data storage technology, usually tiered from slow to fast. For instance, a balance of performance may be achieved by placing directories, cross-reference tables, and other data that must be processed extremely fast on solid-state drives, placing the remaining data on high-capacity hard disk drives, and storing rarely used data on less expensive, slower hard disk drives.
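The slow-to-fast tiering just described can be sketched as a simple placement rule; the tier names, thresholds, and access rates are hypothetical:

```python
# Sketch: assign data objects to storage tiers by access frequency.
# Tier names and thresholds are hypothetical.
def choose_tier(accesses_per_day: float) -> str:
    if accesses_per_day >= 100:
        return "ssd"          # hot data: directories, cross-reference tables
    if accesses_per_day >= 1:
        return "hdd"          # warm data: routine working files
    return "archive-hdd"      # cold data: rarely used, cheapest storage

objects = {"index.db": 5000, "reports/": 12, "backups-2015/": 0.01}
placement = {name: choose_tier(rate) for name, rate in objects.items()}
print(placement)
# {'index.db': 'ssd', 'reports/': 'hdd', 'backups-2015/': 'archive-hdd'}
```

Real providers automate this migration continuously, but the trade-off is the same: fast tiers cost more per gigabyte, so only frequently accessed data earns a place there.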

Data redundancy is often provided by the cloud provider and refers to storing multiple copies of data across numerous servers in geographically separated locations. The cloud service provider is responsible for maintenance of equipment, integrity of the data in storage, and protection of data when moved between service provider locations.

Data stored in a cloud environment sometimes requires specialized handling because it may be virtually stored in a number of different locations. A storage mechanism such as erasure coding may be employed to supply data redundancy in the event of error or loss, while cloud-based data encryption may be based upon the nature of the data or on encryption services provided by the cloud service provider.

  1. Erasure Coding Erasure coding (EC) is a data storage and data protection technology used to provide high availability and reliability for cloud-stored data. Using erasure coding, a block of data may be divided into a number (represented by N) of blocks, or “fragments,” and encoded with redundant data pieces. Each fragment may then be dispersed (a technique referred to as data dispersion) across the cloud provider's servers and, in many cases, geographic locations. If some of the data fragments are lost or corrupted for any reason, the entire data set may be reconstructed from a number (represented by M) of the remaining fragments. In this technique, M of N refers to the minimum number of fragments (M) required to regenerate the data out of the total number of original fragments (N).

    Erasure codes, in actual practice, may be compared to RAID data reconstruction techniques because of their ability to reconstruct data using codes. Erasure codes can be more CPU intensive and require more processing overhead, but this type of failsafe coding can be used with very large “data chunks” and works well with data warehousing as well as big data applications.

  2. Data Encryption Various types of encryption are utilized for cloud-based data storage. Encryption in the cloud depends upon the cloud service provider's capability as well as the cloud delivery model requirements. For safety and protection, a corporate policy may dictate that data at rest in a storage location should always be encrypted. The downside is that encryption always inserts additional processor overhead and latency, or delay, into both the data writing and retrieval processes. There are two major categories of cloud-based data encryption:
    • Storage-level encryption is encryption performed at the storage device. Data entering the device is encrypted; data leaving the device is decrypted. Encryption keys are always maintained by the cloud service provider. Because the data is encrypted only on the device and the service provider holds the keys, the benefit of storage-level encryption is realized only if the device is stolen.
    • Volume storage encryption encrypts the entire storage volume in which the data resides. Data encryption keys are maintained by the data administrator or data owner.
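The M-of-N reconstruction behind erasure coding can be illustrated with single XOR parity, the same principle RAID uses; real erasure codes such as Reed-Solomon tolerate multiple simultaneous losses, and the fragment contents below are arbitrary:

```python
from functools import reduce

# Toy M-of-N erasure coding with single XOR parity: N = 4 stored
# fragments (3 data + 1 parity), and any M = 3 surviving fragments
# can regenerate the missing one.
data = [b"\x01\x02", b"\x0a\x0b", b"\x10\x20"]

# Parity fragment: byte-wise XOR across all data fragments.
parity = bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*data))
fragments = data + [parity]   # dispersed across servers/regions

# Lose any one fragment; XOR of the survivors reconstructs it.
lost = fragments.pop(1)
rebuilt = bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*fragments))
print(rebuilt == lost)  # True
```

The CPU cost the text mentions is visible even here: every write computes parity across all fragments, and every recovery recomputes it across the survivors.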

Cloud Legal and Privacy Concepts

As the growth of the cloud continues, several major concepts are on a collision course. The cloud is a seeming panacea of everything good about computing. The capability for expansion, the simplicity of configuration, the ease of virtualization, and the total demand-as-you-need mentality are extremely attractive to individual users as well as major organizations.

Although some cloud services seem expensive today, as with any competitive utility model, the marketplace will bring prices down to acceptable levels. The advantages of the use case models as well as the economics of the pricing models will only serve to drive more and more users to the cloud.

With all of these conveniences, the challenge and complexity of complying with legislation, regulations, and laws issued by countries worldwide will become an ever-increasing challenge for cloud providers as well as cloud users. Accomplishing adherence, compliance, or conformity to international laws on a global basis will become extremely important as the use of the cloud continues to expand.

As with any regulatory and legal environment, legislation, regulations, and laws will continue to change as legislators and regulators take actions. As always, when dealing with legal, compliance, and regulatory issues, the best advice to security professionals is to consult with relevant and knowledgeable professionals who specialize in legal cloud issues.

Borderless Computing

Borderless computing is defined as a globalized service that is widely accessible with no preconceived borders. The concepts of data ownership and personal privacy are paramount when considering the transborder data storage and processing capability the cloud offers. Cloud provider offerings and the relevant contractual relationships will often refer to availability zones, usually global in nature, which are broadly defined areas of operation, data storage, data processing, and other activities carried out by a cloud provider. These availability zones define geographic areas such as the North American and South American zones, which are sometimes combined and just referred to as “the Americas.” Other zones include the Asia-Pacific zone and the EMEA zone, which refers to Europe, the Middle East, and Africa. Although these are somewhat standardized business operational territories, they become extremely large and unwieldy when you take into account transborder data flows, in-country data storage, and local data processing as they relate to complying with numerous country laws and regulations. Many cloud providers segment the traditionally large business zones into smaller cloud operational zones based on geography or similarity of legislation, such as that of the European Union (EU).

Privacy Issues

Various international laws and regulations typically specify responsibility and accountability for the protection of information. Accountability for adherence to these laws and regulations remains with the owner of the information. Therefore, corporate entities and organizations that utilize cloud-based services must impose the same accountability requirements upon the cloud service provider through contractual clauses specifying compliance requirements.

Personal data is defined as any data relating to a natural human being, referred to as a data subject. An identifiable human being is a person who can be identified, directly or indirectly, by one or more factors specific to their physical, physiological, mental, economic, cultural, or social identity.

Legal Requirements

Numerous legal issues and requirements become relevant when personally identifiable data is collected, processed, transmitted, and stored in global-based cloud environments.

A number of legal requirements and compliance issues are of concern to data owners as well as cloud service providers.

  1. International Regulations/Regional Regulations International privacy regulations range from unenforced to very strong, depending upon a country's basic philosophy about ensuring the personal privacy of its population. Countries that are bound by the European Union data protection laws, the OECD model sponsored by the Organization for Economic Cooperation and Development (OECD), or the APEC model sponsored by the Asia-Pacific Economic Cooperation (APEC) support very strict personal privacy requirements. The laws and regulations of these organizations generally state that the entity that originally obtains personally identifiable information from an individual is responsible for ensuring that any users of this information comply with all applicable laws.
  2. Restrictions of Transborder Information Flow Globally, some national legislation, laws, and regulations enacted by various governments restrict the transfer of personally identifiable information to jurisdictions where the level of privacy or data protection is considered weaker than that of the original storage location. This concept is similar to declassifying information simply by transferring it to a lower-classified folder. This type of legislation is intended to ensure that whenever a data transfer occurs, the data maintains its privacy and protection.
  3. Contractual Privacy Obligations Many corporations and organizations are contractually bound to safeguard the privacy of personally identifiable information that is in their possession. Some of these contractual obligations may fill the gap created by the lack of specific laws and legislation. Typically, contractual privacy obligations are imposed by third-party contracts with information providers, industry consortiums, and other groups. For example, the Payment Card Industry Data Security Standard (PCI DSS), which is a proprietary information security standard promoted by major credit card providers, establishes many standards for the protection of personal privacy and the handling of personally identifiable information.
  4. Exploitation of Data Privacy Individuals seeking to exploit restricted data, such as data that is confidential, consists of personally identifiable information, or is restricted by legal or regulatory dissemination prohibitions, make use of geographic or legal environments that are friendly to the networks operating there. A data haven is a location that is friendly to the concept of data storage regardless of the legal environment. Data havens may be geographic locations within country borders that have loose regulatory requirements, or they may be onion networks such as The Onion Router (TOR) network, which store or forward information anonymously.

Privacy Legislation and Regulations

Numerous laws and regulations have been enacted through the years to address requirements of personal privacy protection and accountability of the holders and processors of data and to provide transparency to operations.

  1. Directive 95/46 EC A highly developed area of law in Europe involves the right to privacy. In 1980, the Organization for Economic Cooperation and Development (OECD) issued guidelines governing the protection of privacy in transborder flows of personal data. The seven principles governing the OECD's recommendations for privacy protection are as follows:
    1. Notice Notice should be provided to subjects when their personally identifiable data is being collected.
    2. Purpose Data should be used only for the purpose stated and not for any other purposes.
    3. Consent Personal information should not be disclosed without the data subject's consent.
    4. Security Collected data should be kept safe and secure.
    5. Disclosure Data subjects should be informed as to what entity is collecting their data.
    6. Access Data subjects should be allowed to access their personally identifiable data and make corrections to any inaccurate personal data.
    7. Accountability Data subjects should have a method available to them to hold data collectors, who are in charge of personally identifiable data, accountable for not following the preceding principles.
  2. Unfortunately, the OECD guidelines were nonbinding and were viewed only as recommendations, while privacy laws remained mixed across European nations.
  3. General Data Protection Regulation The General Data Protection Regulation (GDPR) is an EU law that is the successor to Directive 95/46 EC. This legislation is intended to unify data protection and personal information rights within the 28 European Union member states. It is intended to address the ad hoc application of Directive 95/46 EC, which was only a directive and was viewed as optional by several countries. The General Data Protection Regulation will not require ratification by the member states but will bind them to a unified legal framework concerning information privacy. The law will introduce many significant changes for data processors and controllers, among which is a single set of rules with which EU member states must comply. The following may be considered some of the more important changes:
    • Control of international data transfer
    • Establishment of the role of a data protection officer
    • Timely processing of access requests
    • Single set of rules for all member states
    • Definition of responsibility and accountability
    • Increased sanctions for violations

    The original term in the prior directive was the so-called right to be forgotten. Although a noble concept and idea, it was vaguely worded, and it offered no means or methods for enforcement. It has been replaced in the GDPR by a better-defined right to erasure, which provides the data subject with certain capabilities and means to request elimination of personal data based upon a number of explicit grounds. This allows the fundamental rights and freedoms of the data subject to be exercised over the desires of the data controller.

    Fundamental concepts such as privacy by design and data protection impact assessments (DPIAs) build in the requirements that privacy be the default operational scenario and that impact assessments be conducted to identify the specific risks that might occur to the rights and freedoms of data subjects.

    A safe harbor provision is a provision of a statute or a regulation that specifies that certain conduct will be deemed not to violate a given rule. Applied to U.S. companies operating within the confines of the original EU Directive 95/46 EC, the Safe Harbor Privacy Principles allow U.S. companies to opt in to the EU privacy program and register their certification if they meet the European Union requirements for the storage and transfer of data. The safe harbor program was developed by the U.S. Department of Commerce; it covers U.S. organizations that transfer personal data out of the geographic territory of the 28 European Union member states, referred to as the European Economic Area (EEA). Organizations utilizing safe harbor privacy principles may also incorporate information usage clauses within contractual agreements concerning the transfer of data.

  1. Health Insurance Portability and Accountability Act of 1996 The Health Insurance Portability and Accountability Act of 1996 (HIPAA) legislation in the United States directed the Department of Health and Human Services to create and adopt national standards for electronic healthcare transactions and to identify and categorize national identifiers for providers, health plans, and employers. Protected health information (PHI) can be stored in the cloud under HIPAA regulations as HIPAA-protected data.
  2. Gramm-Leach-Bliley Act The Gramm-Leach-Bliley Act (GLBA) is a United States federal law concerning banking regulations, banking mergers and acquisitions, and consumer privacy regulations. Major sections of the law put in place controls and governance over the collection, disclosure, and protection of consumers' nonpublic personal information, including personally identifiable information. This includes the protection of personal financial information as well as notification to the consumer of the collection and dissemination of information. Financial institutions are required to provide each consumer with a privacy notice stating where the information is shared, how the information is used, and how the information is stored. Through the Safeguards Rule, provisions very similar to those of the EU's General Data Protection Regulation were put into place, including a central point of contact and a data privacy risk analysis for each department handling nonpublic information. Pretexting provisions prohibit the practice of obtaining private information through false pretenses.
  1. Stored Communications Act The Stored Communications Act (SCA) is a U.S. law that addresses the disclosure of electronic communications and data held by Internet service providers (ISPs) under various circumstances. It was enacted into law as Title II of the Electronic Communications Privacy Act of 1986 (ECPA).

    ECPA describes the conditions under which the government is able to access data and can compel Internet service providers to disclose private content stored by a customer.

    1. Electronic Communication Service This section stipulates that the government must obtain a search warrant to compel an ISP to turn over an unopened email that has been in storage for 180 days or less.
    2. Remote Computing Service This section of the law states that if data has been stored in excess of 180 days, the government can use various legal documents to require the disclosure of the information.
  2. Patriot Act The Patriot Act was enacted in direct response to the actions of terrorists on September 11, 2001. The act was instrumental in changing a large number of prior laws and strengthening the security of the United States against the threats of terrorism. Many privacy laws were greatly affected by the Patriot Act and remain to this day highly controversial. Titles under the act modified, reduced, or expanded the provisions of a large number of laws:
    • Title I: Enhancing domestic security against terrorism
    • Title II: Surveillance procedures
    • Title III: Anti–money laundering to prevent terrorism
    • Title IV: Border security
    • Title V: Removing obstacles to investigating terrorism
    • Title VI: Victims and families of victims of terrorism
    • Title VII: Increased information sharing for critical infrastructure protection
    • Title VIII: Terrorism criminal law
    • Title IX: Improved intelligence
    • Title X: Miscellaneous

eDiscovery

Discovery is a legal technique used to acquire knowledge required in the prosecution or defense of lawsuits or in any other part of litigation. eDiscovery specifically describes electronic information that is stored in or transferred over network storage devices, cloud systems, and other information repositories. The acquisition of such data is based on local rules and agreed-upon processes usually sanctioned by courts with jurisdiction; attorneys for either side request access to various data and are allowed access to data based upon court-established criteria. Attorneys may place a legal hold on data that is to be collected and analyzed as potential evidence in litigation processes. The Federal Rules of Civil Procedure amendments, effective in 2006 and 2007, substantially enhanced the requirements for proper retention and management of electronically stored information and compelled civil litigants to comply with proper handling techniques or be exposed to substantial sanctions. In the event of mishandling of electronically stored information, a finding of spoliation of evidence may be handed down by the court.

The following types of data are defined as electronically stored data:

  1. Electronic Messages In 2006, the U.S. Supreme Court recommended that a category be created under the Federal Rules of Civil Procedure that specifically included e-mails and instant message chat information.
  2. Voicemail Voicemail has been deemed data that is subject to eDiscovery and may be subject to litigation hold. A litigation hold is a legal requirement for participants of a lawsuit to retain and preserve records and evidence. Participants of lawsuits may have a duty to retain voicemail records that may be requested during legal proceedings.
  3. Databases/Data in Repositories Information that is subject to discovery may be stored throughout the enterprise. It may be located in personnel notes and files, in databases, and in stored communications. Structured databases, referred to as relational database management systems (RDBMSs), store large amounts of data.

Cloud-based data storage presents challenges for the security professional as well as for attorneys in identifying, locating, and obtaining information that may be subject to eDiscovery. Although the Federal Rules of Civil Procedure (FRCP) in the United States require parties to litigation to be able to produce requested electronically stored data that is in their custody, possession, and control, there is much debate concerning whether custody, possession, and control resides at the cloud provider or the information owner level.

Location is also an extraordinary problem in the production of requested electronically stored information in the cloud. Cloud information can be stored in a number of country jurisdictions and, due to failover or load-balancing activities, may be transferred between countries and jurisdictions frequently and at will. Obtaining specifically requested information from the cloud provider may be difficult under current laws, regulations, and contractual responsibilities.

Contracts with cloud service providers should also stipulate that the service provider is required to inform the information owner in the event of any court-ordered legal action with regard to the owner's data stored by the provider.

Cloud Virtualization Security

Virtualization is the essential technology behind cloud-based implementation of services. Through virtualization, virtual machines are separated from the underlying physical machines. In a virtual environment, the virtual machine is created under a hypervisor and utilizes the underlying physical hardware. The virtual machine that is created is referred to as a guest machine, while the physical server is referred to as the host machine. A hypervisor controls all of the interactions between the virtual machine and the host machine's physical assets such as RAM, CPU, and secondary storage.

Virtual technology may be attacked just as any other technology may. A benefit of a virtual machine is that it can be taken down and immediately re-created in the event of a penetration or attack. The virtual environment separates the attacker from the underlying hardware, although it is possible for a dedicated, experienced hacker to successfully gain root access and attack the hypervisor. Once in the hypervisor, the attacker has access to all of the virtual machines controlled by that hypervisor.

Security, as applied to virtualization, includes a collection of controls, procedures, and processes that ensure the availability, integrity, and confidentiality of the virtualization environment. Security controls can be implemented at various levels within the virtual environment. Controls can be implemented directly on a virtual machine or address vulnerabilities in the underlying physical device.

Secure Data Warehouse and Big Data Environments

Big data refers to data sets that are so large and complex that predictive analytics or other advanced methods are required to extract value from them. These methods uncover new correlations that can spot business trends, prevent diseases, combat crime, drive solutions for complex physics simulations, and provide for biological and environmental research. Data sets grow in size due to the ever-growing availability of cheap and numerous information-sensing and gathering technology.

It was estimated that, as of 2014, in excess of 667 exabytes of data pass through the Internet annually. The retail, financial, government, and manufacturing sectors all maintain data repositories that are continuously analyzed to spot trends for the exploitation of marketing and other opportunities. For instance, it is reported that Walmart records in excess of 1 million customer transactions every hour, which are forwarded to databases estimated to contain more than 3.1 petabytes of data. This data is continuously analyzed to spot trends, outages, buying preferences, and other pertinent information on a worldwide basis. Big data is also used for climate change simulations and financial market prediction, as well as for military and science applications.

The data sets are so large that it is not possible to analyze the data on standard PCs or even mainframe computers. Standard relational database management systems, as well as conventional predictive analytics and data visualization applications, are not robust enough to handle analysis at this scale. The analysis instead requires hundreds or even thousands of servers running in parallel to perform the required processing. Big data is growing so large and so fast that what was defined as big data only a few years ago is commonplace today, and analytic applications are having difficulty keeping up.

Big data has been described as having three vectors: the amount of data (volume), the speed of data in and out of the database (velocity), and the range of data types and sources (variety). In 2012, Gartner, Inc., updated its definition of big data as follows: “Big data is high volume, high velocity, and/or high variety information assets that require new forms of processing to enable enhanced decision making, insight discovery and process optimization.” The analysis of such data requires new forms of integration to uncover large hidden values from data sets that are diverse, complex, and of a massive scale.

Data Warehouse and Big Data Deployment and Operations

Big data requires exceptional technologies, and the use of a virtual architecture is one option for dealing with the immense amount of data gathered and processed within a system. Multiple processing units designed as distributed parallel architecture frameworks distribute data across thousands of processors to provide much faster throughput and increased processing speeds.

MapReduce is a parallel processing model and application used to process huge amounts of data; it distributes database queries across hundreds or thousands of machines, which then process the data in parallel. The results of these queries are gathered, analyzed, and delivered. The MapReduce framework distributes the workload of processing the data across a large number of virtual machines. The Hadoop framework includes a big data file system, referred to as the Hadoop Distributed File System (HDFS), which stores and maintains data across a very large distributed system. The MapReduce application uses the Hadoop Distributed File System.
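The map, shuffle, and reduce phases can be sketched with a classic word count. This is a minimal, single-machine illustration of the model, not Hadoop itself; in a real cluster each phase would run in parallel across many machines against files in HDFS, and all function names here are illustrative:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map: emit a (key, value) pair for every word in one input split."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle_phase(mapped_pairs):
    """Shuffle: group all values by key across the outputs of every mapper."""
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine each key's list of values into a single result."""
    return {key: sum(values) for key, values in groups.items()}

# Each string stands in for one input split processed by one mapper.
documents = ["big data needs parallel processing",
             "parallel processing needs many machines"]
mapped = chain.from_iterable(map_phase(d) for d in documents)
counts = reduce_phase(shuffle_phase(mapped))
print(counts["needs"])     # 2
print(counts["parallel"])  # 2
```

The design point is that mappers never coordinate with each other; only the shuffle step brings matching keys together, which is what lets the work spread across thousands of machines.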

A large number of companies, including eBay, Amazon, and Facebook, make use of big data and data warehouses to analyze consumer purchasing and merchandising trends. It has been estimated that the volume of business data worldwide doubles in just over 14 months.

Securing the Data Warehouse and Data Environment

Big data makes use of a distributed processing architecture featuring tens of thousands of server clusters working in parallel. With the data volume, data velocity, and sheer magnitude of data storage and transport in a web-based parallel processing environment, securing these systems is much more difficult.

The immensity of the data and the requirement for parallel processing increase the difficulty of applying encryption as well.

Encrypted search and cluster formation in big data were demonstrated in March 2014. Encryption speed and system resilience are the primary focus of current data encryption goals. Researchers are proposing approaches for identifying encoding techniques that permit expedited searches without decrypting and re-encrypting data during the search process. This work leads toward eventual enhancements in data security.

Big Data Access Control and Security

The foundations of security are defined as availability, integrity, and confidentiality, and each is pushed to an absolute limit by the concepts of big data. Access control requires that a data owner classify data according to some criteria and provide access control decisions. The data represented in these monumental structures is so large and comes from such a variety of locations and sources that it is very difficult for one person, or even a team of people, to assume the traditional ownership role over data of this magnitude.

The handling of big data creates challenges for many organizations. They must first determine data ownership and then determine the attributes that should be assigned to the data. One approach is to assign data ownership to the data outputs of search and analysis operations, which may be much more manageable than attempting to classify the entire database.

New terminology may appear where the data owner controls the data on the input side and the information owner assumes control of the data on the data output side. Two types of data ownership are surfacing out of this concept. The data owner places the data into the database, classifies the data, and determines access criteria. The information owner, on the other hand, owns and classifies information after a resulting process such as a search, sort, or analysis procedure.

Secure Software-Defined Networks and Virtual Environments

Virtual environments offer the capability of running numerous virtual machines on a single underlying server, thus taking advantage of economies of scale and enabling the most efficient use of the underlying hardware. Virtualization increases resource flexibility and utilization and reduces infrastructure costs and overhead.

Software-Defined Networks

As you saw in Chapter 8, conventional network design has to date developed from a hierarchical structure such as tree, ring, star, and other network topologies built upon physical network connection devices such as servers, routers, and switches. In the world of client/server computing, the static design was suitable because once the client was connected to a server, the hardware routing did not change. With the dynamic computing requirements and increasing storage needs of today's enterprise data centers, designers have sought a much more flexible, dynamic means of interconnecting devices. These revolutionary design changes have been prompted by several environmental changes within the IT industry as well as by business demand. Some of these changes are listed here:

  1. Changing Data Traffic Patterns Traditional data traffic featured client computers connected through a number of switches or routers and hardwired to a server. The job of the server was to acquire information from storage devices, process the information, and serve it to the client computer. The client computer, in this environment, was usually executing an application locally and was accessing the server environment purely as a communications method to the outside world of the Internet or to other servers that provided communications applications such as email and other services.

    North/South is a classic data path in most enterprise networking environments. It refers to the standard data channel between a lower-level client computer and a higher-level server computer. As data centers have grown and the demand for higher-level applications that access multiple databases on many different servers has increased, a single request from a client computer may trigger a large amount of communications horizontally between a number of communicating servers containing databases and other components required for the communication. This data path is referred to as East/West, which describes a horizontal or machine-to-machine data path. Today's data processing environment requires a dynamic capability to set up data paths as required for the task at hand. Hardwired static data paths may no longer suffice.

  2. Device-to-Machine Access Requirements Users desire access to content and applications from multiple device formats. Personally owned devices (PODs) are quickly becoming the norm within most computing environments, which offer the ability to connect from anywhere at any time. In most organizations, users are employing mobile personal devices to access the organization's network. Network access must be established in such a manner as to provide flexibility in that the same user may be connecting using different devices and requiring different data formatting and different data presentation methods.
  3. Virtualization and Cloud Services Organization IT departments have wholeheartedly embraced all of the different types of cloud services, resulting in a substantial demand for virtualized networks. Network users now demand the capability to access applications, infrastructures, and other IT resources on demand from a wide variety of source devices.
  4. The Demand for Dynamically Configured Network Environments Big data, defined as mega data sets, is driving the requirement for the virtual configuration of massive processing capability, where thousands of interconnected servers are configured to parallel-process mega data sets. Virtualized networks will enable any connectivity required by massive cloud-based processing.

Software-defined networking (SDN) is a virtualization methodology in which the actual data flow across the network is separated from the underlying hardware infrastructure. This allows networks to be defined almost instantaneously in response to consumer requirements. It is accomplished by consolidating common IT policies into a centralized platform that can automate provisioning and configuration across the entire IT infrastructure. Networks can be configured within minutes using virtualized technology rather than spending hours or days rewiring and reconfiguring hardware-based data centers.

Software-Defined Networks in Practice

There are a number of software-defined network application providers offering a wide selection of products. To date, no one standardized application has surfaced as a clear leader. There are, however, several commonalities among all of the SDN models:

  1. Southbound APIs Information is sent to the underlying hardware infrastructure with provisioning and deployment instructions.
  2. Northbound APIs Information concerning network operation data volume and other considerations is communicated from the hardware layer to the applications and business logic. This allows operators to monitor network operations.
  3. SDN Controller The software-defined network controller is the central operational application that allows network administrators to design the virtualized network system using underlying hardware infrastructure.
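The three roles above can be made concrete with a toy controller model. This is a hedged sketch under assumed names (ToySdnController, connect, flow_tables are illustrative, not any vendor's API): an intent arrives through the northbound interface, the controller compiles it, and concrete flow rules are pushed southbound to each switch along the path:

```python
class ToySdnController:
    def __init__(self, switches):
        # Southbound state: one flow table per managed switch.
        self.flow_tables = {name: [] for name in switches}

    def connect(self, src, dst, path):
        """Northbound API: express the intent 'let src talk to dst'.

        The controller compiles the intent into per-switch forwarding
        rules and installs them southbound, one hop at a time.
        """
        hops = path + [dst]
        for switch, next_hop in zip(path, hops[1:]):
            self._install_rule(switch, match=(src, dst),
                               action=f"forward to {next_hop}")

    def _install_rule(self, switch, match, action):
        # Southbound API: push one concrete flow rule to one device.
        self.flow_tables[switch].append({"match": match, "action": action})

controller = ToySdnController(switches=["s1", "s2"])
controller.connect("hostA", "hostB", path=["s1", "s2"])
print(controller.flow_tables["s1"])
# [{'match': ('hostA', 'hostB'), 'action': 'forward to s2'}]
```

The point of the split is that the operator expresses only the high-level intent; tearing down or rerouting the path is a matter of recompiling rules, not rewiring hardware.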

Network administrators make use of preconfigured virtual machine images sometimes referred to as snapshots that are ready to deploy on a hypervisor. A virtual appliance is a virtual machine image that is preconfigured and includes preloaded software and is ready to run in a hypervisor environment. The virtual appliance is intended to eliminate the installation, configuration, and maintenance costs associated with running complex virtual environments by preconfiguring ready-to-use virtual machine images.

There is a difference between virtual machines and virtual appliances. A virtual machine is a generic, stand-alone virtualized platform that consists of a CPU, primary storage (RAM), secondary storage (hard disk), and network connectivity. A virtual appliance is in fact a virtual machine by definition, but it also contains the operating system as well as software applications that are preconfigured and ready to run as soon as the virtual appliance image is in place. The major difference between a virtual machine and a virtual appliance is the configuration of the operating system. Most virtual machines use a standard preconfigured image of an operating system that includes all of the latest patches and upgrades, whether it be Windows, Linux, or another OS. In a virtual appliance, the operating system may be customized for the specific application and environment in which the appliance will be performing. This stripped-down or specifically configured operating system is referred to as just enough operating system (JeOS), which is pronounced “juice.”

Host Clustering

A host machine provides the underlying hardware upon which virtual machines run. A single host may run a very large number of virtual machines, all sharing the same CPU, RAM, hard drive, and network communications capability. Host clustering is a method whereby a number of host machines are logically or physically connected so that all of their resources (such as CPU, RAM, hard drive, and network communications capability) can be shared among all of the hosted virtual machines. Host clustering provides the ability to claim more resources as a workload increases, as well as providing failover to another host in the event one host fails.

Within host clustering, all of the resources for all of the host computers are managed as if they were one large machine and are made available to the virtual machines that are members of the cluster. With a large number of cluster members (virtual machines) in contention for the limited resources of the host cluster, various resource sharing concepts are used to define the allocation of resources. The cluster administrator may define the requirements and allocations for each member of the cluster using the following techniques:

  1. Reservations A virtual machine may be allocated a minimum amount of pooled resources, thus guaranteeing the availability of the resources for performing its application.
  2. Maximum Limit A virtual machine may be restricted to a maximum amount of pooled resources, thus ensuring that a single virtual machine or application cannot demand excessive resources and thereby create a denial of service for other virtual machines and applications.
  3. Shares Allocation Shares allocation is a technique of prioritizing and distributing any remaining pooled resources among the cluster member virtual machines after all reservations have been fulfilled.
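The three techniques can be sketched as a simple allocation function over a pooled CPU capacity. All numbers and field names here are hypothetical; real hypervisor schedulers are far more elaborate, and this sketch leaves any remainder beyond a VM's limit unallocated:

```python
def allocate(pool, vms):
    """Divide `pool` units of a resource among VMs using
    reservations, shares, and maximum limits."""
    # 1. Reservations: every VM is first guaranteed its minimum.
    grants = {name: vm["reservation"] for name, vm in vms.items()}
    remaining = pool - sum(grants.values())

    # 3. Shares: distribute what is left in proportion to shares...
    # 2. ...but never past a VM's maximum limit, so one VM cannot
    #    starve the others of the pooled resource.
    total_shares = sum(vm["shares"] for vm in vms.values())
    for name, vm in vms.items():
        extra = remaining * vm["shares"] / total_shares
        grants[name] = min(vm["limit"], grants[name] + extra)
    return grants

vms = {
    "web": {"reservation": 2, "limit": 8,  "shares": 3},
    "db":  {"reservation": 4, "limit": 16, "shares": 1},
}
grants = allocate(pool=14, vms=vms)  # 14 CPUs in the host cluster pool
print(grants["web"], grants["db"])   # web gets 8, db gets 6
```

Here the reservations consume 6 of the 14 units; the remaining 8 are split 3:1 by shares, with the web VM capped at its limit of 8.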

Storage Clustering

Storage clustering is the use of several storage servers managed and interconnected to increase performance, capacity, or reliability. Storage clustering distributes workloads to each storage server and manages access to all files and data stores from any server regardless of the physical location of the files and data stores. Storage clustering should have the ability to meet the required service levels (specified in SLAs), keep all client data separate, and adequately safeguard and protect the stored data. The benefits of virtual storage clustering are as follows:

  • One design of pooled storage devices is constructed of groups or arrays of inexpensive disks called just a bunch of disks (JBOD). This array features a large number of inexpensive disks of varying sizes and capabilities attached with Serial Attached SCSI (SAS) host bus adapters.
  • Pools of storage devices can be used to create large virtual disk storage. RAID-based storage methodologies such as simple, mirrored, and parity will be handled by the virtual storage controller, with parity being automatically striped across drives.
  • Data dispersion will automatically copy data around the storage cluster, utilizing space efficiently and creating mirrored copies as requested. This will achieve space utilization load balancing to effectively utilize pool storage capability by storing data anywhere it will fit.
  • Rules such as anti-affinity or affinity rules enforce hypervisor policies that keep some entities such as VMs together or separate depending upon requirements.
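The data dispersion and load-balancing behavior described above can be sketched as a placement function: each block is mirrored onto the emptiest distinct disks in the pool, which both balances utilization and enforces a simple anti-affinity rule. Disk names, sizes, and the function name are illustrative assumptions:

```python
def place_block(disks, block_size, copies=2):
    """Store `copies` mirrors of a block on the distinct disks with the
    most free space, updating each chosen disk's remaining capacity."""
    # Anti-affinity: sorting by free space and taking distinct disks
    # guarantees two mirrors of one block never share a disk.
    targets = sorted(disks, key=disks.get, reverse=True)[:copies]
    for disk in targets:
        disks[disk] -= block_size
    return targets

disks = {"d1": 100, "d2": 60, "d3": 80}   # free space in GB
placed = place_block(disks, block_size=10)
print(sorted(placed))   # ['d1', 'd3'] -- the two emptiest disks
print(disks)            # d1 and d3 each lose 10 GB of free space
```

Repeated calls naturally spread data "anywhere it will fit," since each placement shifts which disks currently have the most free space.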

Two basic types of storage clustering architectures exist:

  1. Loose Coupled Cluster A loose coupled cluster is very typical of the use of JBOD technology in the hard drive array. It may start small and grow larger as disks are added. It is an informal assemblage of disks, all of which are virtualized into an apparent virtual single drive. The restriction on this type of disk clustering is the speed of the data interface. This may limit the accessibility to the entire drive stack. As more drives are added, access latency may increase.
  2. Tight Coupled Cluster A tight coupled cluster is a drive array that is usually provided by a single manufacturer and features a proprietary physical backplane, which maintains connectivity to both drives and controller nodes. This type of cluster usually has an initial fixed drive size and delivers a very high-performance interconnect between servers for load-balanced performance. Tight coupled cluster products may be initially available in a variety of sizes and may grow through the addition of both drives and controllers as space requirements dictate.

Security Benefits and Challenges of Virtualization

Attacks and countermeasures used in virtualized environments are very similar in nature to any attack and countermeasure utilized in a traditional hardwired network. Appropriate risk assessments should be made to identify probable threats and vulnerabilities within the environment. Organizations should conduct regular tests and assessments as well as provide for continuous monitoring and logging to identify the effectiveness of security controls and countermeasures.

Security Benefits of Virtualization

The virtualization of various network components offers many advantages over using static network devices:

  1. Desktop Virtualization Virtual desktop infrastructure (VDI) is an implementation of a desktop display, complete with running applications, that is presented on the client computer or thin client device. The actual applications, stored data, and underlying computing infrastructure operate in a central location; only the resulting images are sent to the client device. In a VDI environment, no applications or data need to be loaded at the client location, which eliminates the risk of an attacker gaining access to applications or data stored on a client computer.
  2. Virtual Machine Rollback Snapshots may be made of a virtual machine at any time. A snapshot is an exact copy of a virtual machine at a point in time and may be used to reload or repair a virtual machine. In the event the virtual machine is successfully attacked, the machine can be easily rolled back to a state prior to the attack.
  3. Virtual Machine Isolation Virtual machines run in isolated VM environments under a hypervisor running on a host machine. Should the operating system or application running on a virtual machine be attacked, only one virtual machine and not the entire environment is affected.
  4. Resource Pooling Through host clustering techniques, resources are pooled and allocated among virtual machines. This allows for the economic use of resources as well as the expansion capability of any virtual machine or application. Clustering techniques also include clustered storage, which prevents loss of data should a single device fail or be compromised.

The benefits of server virtualization include better resource allocation as well as the ability to revert to an earlier snapshot or operating condition in the event of a problem or attack. Desktop virtualization, referred to as virtual desktop infrastructure (VDI), allows more flexibility to push out the correct image to a user device depending upon the required format of the device.
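The snapshot-and-rollback benefit described above can be modeled in a few lines. This is a toy sketch of the concept only (the class and its fields are hypothetical, not any hypervisor's API): a snapshot is an exact point-in-time copy of machine state, and rollback discards everything that happened after it, including an attacker's changes.

```python
import copy

class VirtualMachine:
    """Toy model of a VM's mutable state with snapshot/rollback.

    Hypothetical sketch for illustration; real hypervisors snapshot
    disk and memory images, not a Python dictionary.
    """
    def __init__(self, name: str):
        self.name = name
        self.state = {"packages": ["base"], "config": {}}
        self._snapshots: dict[str, dict] = {}

    def snapshot(self, label: str) -> None:
        # Capture an exact point-in-time copy of the machine state.
        self._snapshots[label] = copy.deepcopy(self.state)

    def rollback(self, label: str) -> None:
        # Restore the earlier state, discarding any changes made
        # since the snapshot (including an attacker's modifications).
        self.state = copy.deepcopy(self._snapshots[label])

vm = VirtualMachine("web01")
vm.snapshot("pre-deploy")
vm.state["packages"].append("malicious-implant")  # simulated compromise
vm.rollback("pre-deploy")
assert vm.state["packages"] == ["base"]  # machine restored to clean state
```

Note the trade-off that appears later in this section: the same rollback that removes an implant also removes any legitimate configuration or data changes made after the snapshot.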

Security Challenges of Virtualization

Substantial security challenges surface with the use of virtualization:

  1. Hypervisor The hypervisor allows virtual machines to communicate with one another. This creates a channel of communication that is no longer monitored by standard monitoring and threat mitigation methods; the data is virtually invisible to the standard hardwired network environment. If a hypervisor is compromised, the attached virtual machines will also be compromised.
  2. Virtual Machine Theft Extreme care must be taken to protect snapshot images of virtual machines. These copies of a virtual machine image can easily be stolen through the use of portable storage devices or may be exfiltrated from the company by a number of methods.
  3. Snapshot Reverting In the event of data corruption or a machine attack, a virtual machine can be rolled back to a previous snapshot, an image of the virtual machine at a previous point in time. When this reversion takes place, configuration changes, application changes, or data updates made since the snapshot may be lost.
  4. Rogue Virtual Machines Virtual machines can be created by users without the knowledge of the IT department. They may or may not conform to organizational security policies.
  5. Insecure Trust Levels Virtual machines operating at different trust levels should not be installed on the same host. The security of all the virtual machines on a host or server is only that of the lowest trust level of any one machine.
  6. Virtual Machine Attack Although virtual machines are isolated from each other, in the event of an attack or a compromise, a virtual machine may demand an inordinate amount of host resources. This demand for increased resources will create a denial of service for any other virtual machines on the same host.
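The rogue virtual machine challenge above suggests a simple detective control: periodically compare the VMs actually running on each hypervisor against the IT department's approved inventory. A minimal sketch, with hypothetical VM names and no real hypervisor API:

```python
def find_rogue_vms(running: set[str], approved: set[str]) -> set[str]:
    """Return VMs present on the hypervisor but absent from the
    approved inventory. Such machines were created without IT's
    knowledge and may not conform to security policy."""
    return running - approved

# Hypothetical inventory data for illustration.
running = {"web01", "db01", "testbox-jsmith"}
approved = {"web01", "db01"}
rogue = find_rogue_vms(running, approved)
# testbox-jsmith is flagged for review
```

In practice the `running` set would be gathered from each hypervisor's management interface, and flagged machines would be quarantined pending review rather than simply listed.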

Summary

As a security practitioner, you will be closely involved with various aspects of incident handling involving the penetration of malicious code and malicious software. It is important to have an understanding of the different types of malicious code and the vectors by which it may arrive at the network or host location. Knowledge of the various countermeasures, including both software and hardware appliances, is required. Proper maintenance, patching, and upgrading are required of all software and hardware. It is the responsibility of the security professional to upgrade anti-malware software and devices to maintain an adequate operational baseline. All patches and upgrades should be tested on offline systems that are similar to production-grade systems prior to being pushed out to production.

Endpoint devices include not only host computers but also portable devices and network nodes such as printers or scanners. Each device requires proper protection mechanisms and controls to ensure that it is adequately protected from attack.

The cloud is certainly a big buzzword in IT circles. But with the cloud come extensive security requirements. Data must be secured in transit to and from the cloud servers and encrypted at rest in the cloud location. It is important to understand encryption mechanisms as well as storage and retrieval techniques used in cloud storage. Multi-tenancy refers to the practice of placing multiple tenants on cloud-based servers. This technique of dynamically sharing hardware can cause a number of security challenges.

Along with cloud comes the concept of big data. Big data and data warehousing are techniques of amassing huge amounts of data that may bridge across a number of storage locations. This amount of data is so large that specialized techniques of processing the data in parallel have been developed. Data ownership, classification, integrity, and confidentiality are all challenges facing the security practitioner.

The cloud is made up of virtual environments, which make the best use of hardware in an organization-owned data center. The software-defined network creates the concept of virtual machines and virtual storage that run on top of physical machines. These machines may be placed into operation and taken down very rapidly. Although a virtual server runs in a virtual environment, it is not without security concerns. Virtual machine escape is an attack whereby the attacker literally jumps out or “escapes” from the virtual machine and obtains control of the hypervisor or another virtual machine. With hypervisor privileges, the attacker has complete control of all of the other virtual machines as well as the underlying hardware infrastructure.

Of all of the ideas covered in this book, the concept of the cloud as well as virtual environments is among the most important for the SSCP to learn for the future. The cloud may definitely guide the future of information technology.

Exam Essentials

  1. Hackers, Script Kiddies, Hacktivists Be able to describe all of the different types of attackers.
  2. Viruses and Worms Understand the different types of malware, including viruses, worms, key loggers, spyware, and APTs.
  3. Social Engineering Know the types of social engineering attacks, including phishing, vishing, and spoofing, among others.
  4. Email Management Understand spam management as well as the hazards of clicking email links and opening attachments.
  5. Cookies Understand that cookies may contain personal information that may be the target of an attacker. Persistent cookies store this information in a number of locations on a PC.
  6. Protocol Analyzers Be able to describe the methods protocol analyzers use to obtain data from the network and what the analyzers do with this information.
  7. Vulnerability Scanners Understand the types of vulnerability scanners.
  8. Physical Security Be able to describe securing endpoint devices using physical security.
  9. Host Firewall Know the purpose of a host firewall and how it can be set to block or allow information.
  10. Full Disk and File Encryption Understand the different techniques and tools used to encrypt folders or specific files.
  11. Data Sanitization Understand the methods involved in data sanitization to meet various regulatory and contractual requirements.
  12. Endpoint Hardening Know the reasons for and various methods of endpoint hardening.
  13. Cloud Concepts Be able to describe the four cloud deployment models as well as the three cloud service models.
  14. Software-Defined Networks Understand the basics of network virtualization and the creation of data pathways that are separate from the underlying hardware infrastructure.
  15. Host Clustering Know the reasons for and benefits of host clustering. Be able to describe failover as well as load-balancing techniques.
  16. Storage Clustering Understand and be able to describe the reasons for storage clustering and the two major methods of drive coupling techniques.

Written Lab

You can find the answers in Appendix A.

  1. Write a paragraph explaining the relationship between the hypervisor and a virtual machine.
  2. Describe the difference between a hacker, a certified ethical hacker, and a script kiddie.
  3. Briefly explain the EU General Data Protection Regulation.
  4. Describe the four cloud deployment models and three cloud service models.

Review Questions

You can find the answers in Appendix B.

  1. Which of the following is the most likely to attack using an advanced persistent threat?

    A. Hacker

    B. Nation state

    C. Cracker

    D. Script kiddie

  2. Which of the following options best describes a hacker with an agenda?

    A. Cracker

    B. Nation state

    C. Anarchist

    D. Hacktivist

  3. Which statement most accurately describes a virus?

    A. It divides itself into many small pieces inside a PC.

    B. It always attacks an email contacts list.

    C. It requires an outside action in order to replicate.

    D. It replicates without assistance.

  4. Which option is not a commonly accepted definition for a script kiddie?

    A. A young unskilled hacker

    B. A young inexperienced hacker

    C. A hacker that uses scripts or tools to create an attack

    D. A highly skilled attacker

  5. Which choice describes the path of an attack?

    A. A threat vector

    B. A threat source location

    C. The threat action effect

    D. A threat vehicle

  6. Which choice best describes a zombie?

    A. Malware that logs keystrokes

    B. A member of a botnet

    C. A tool used to achieve privilege escalation

    D. A type of root kit

  7. Which answer best describes an advanced persistent threat?

    A. Advanced malware attack by a persistent hacker

    B. A malware attack by a nation state

    C. An advanced threat that continuously causes havoc

    D. Malware that persistently moves from one place to another

  8. Which choice is the most accurate description of a retrovirus?

    A. A virus designed several years ago

    B. A virus that attacks anti-malware software

    C. A virus that uses tried-and-true older techniques to achieve a purpose

    D. A mobile virus that attacks older phones

  9. Which choice is an attack on a senior executive?

    A. Phishing attack

    B. Whaling attack

    C. Watercooler attack

    D. Golf course attack

  10. Which of the following is most often used as a term to describe an attack that makes use of a previously unknown vulnerability?

    A. Discovery attack

    B. First use attack

    C. Premier attack

    D. Zero-day attack

  11. Which type of client-side program always runs in a sandbox?

    A. HTML4 control

    B. Visual Basic script

    C. Java applet

    D. Active X control

  12. Which of the following would best describe the purpose of a trusted platform module?

    A. A module that verifies the authenticity of a guest host

    B. The part of the operating system that must be invoked all the time and is referred to as a security kernel

    C. A dedicated microprocessor that offloads cryptographic processing from the CPU while storing cryptographic keys

    D. A computer facility with cryptographic processing power

  13. What is the prime objective of code signing?

    A. To verify the author and integrity of downloadable code that is signed using a private key

    B. To verify the author and integrity of downloadable code that is signed using a public key

    C. To verify the author and integrity of downloadable code that is signed using a symmetric key

    D. To verify the author and integrity of downloadable code that is signed using a master key

  14. Which choice least describes a cloud implementation?

    A. Broadly accessible by numerous networking platforms

    B. Rapid elasticity

    C. On-demand self-service

    D. Inexpensive

  15. Which option is not a cloud deployment model?

    A. Private cloud

    B. Corporate cloud

    C. Community cloud

    D. Public cloud

  16. Which of the following options is not a standard cloud service model?

    A. Help Desk as a Service

    B. Software as a Service

    C. Platform as a Service

    D. Infrastructure as a Service

  17. Which of the following is the most accurate statement?

    A. Any corporation that has done business in the European Union in excess of five years may apply for the Safe Harbor amendment.

    B. Argentina and Brazil are members of the Asia-Pacific Privacy Pact.

    C. The United States leads the world in privacy legislation.

    D. The European Union's General Data Protection Regulation provides a single set of rules for all member states.

  18. Which of the following most accurately describes eDiscovery?

    A. Any information put on legal hold

    B. A legal tool used to request suspected evidentiary information that may be used in litigation

    C. All information obtained through proper service of the search warrant

    D. Any information owned by an organization with the exception of trade secrets

  19. Which of the following is most accurate concerning virtualization security?

    A. Only hypervisors can be secured, not the underlying virtual machine.

    B. Virtual machines are only secured by securing the underlying hardware infrastructure.

    C. Virtual machines can be secured as well as the hypervisor and underlying hardware infrastructure.

    D. Virtual machines by nature are always insecure.

  20. Which of the following is most accurate concerning data warehousing and big data architecture?

    A. Data warehouses are used for long-term storage of archive data.

    B. Data is processed using auto-synthesis to enhance processing speed.

    C. Big data is so large that standard relational database management systems do not work. Data must be processed by parallel processors.

    D. Data can never be processed in real time.
