As an SSCP, you may be called upon to serve on an incident response team and take an active part in security investigations involving intrusions, malware, malicious code, and the exfiltration of information from corporate and enterprise networks. It is important for the SSCP candidate to understand the vocabulary used to describe the creation, distribution, and countermeasures related to malicious code.
As a security professional, your job description will undoubtedly include endpoint as well as network security, both working with network equipment and assisting in security investigations. An endpoint is any device at the end of a network connection that does not pass information on to another device. Endpoints include user workstations; network nodes such as printers, scanners, and point-of-sale devices; and portable devices such as tablets, cell phones, and other devices that store data or are used to communicate between the end user and the network. Each of these devices requires its own set of security controls, which the security professional may need to install, calibrate, update, and monitor.
Big data and data warehouses, as well as virtual environments, each present challenges to the security professional. As an SSCP, you should be familiar with not only the terminology but also the type of attacks and mitigation techniques required in each environment.
Malicious code is software designed and intended to do harm. You know from the discussion of risk analysis in Chapter 5 that risk is a threat that exploits a vulnerability. NIST Special Publication 800-30 revision 1, as illustrated in Figure 9.1, identifies both a threat source and a threat action. Malicious code may be the result of any one of several threat sources:
Each threat source may create a threat action. The threat action is the actual attack or threat that exploits a vulnerability of the target. For instance, a threat action used by a commercial hacker may be to infiltrate the chief executive officer's personal computer and access any information or documents concerning the release of the new product.
Another example of threat sources creating threat actions may be that of a hurricane as a threat source. A hurricane may provide a number of threats such as high winds, flooding, lightning strikes, creation of debris that blocks roads, and disruption of utilities. Each of these threat actions may exploit a vulnerability. High winds may exploit the weak construction of the roof of the building.
Each threat action requires some sort of vehicle and means of delivering the payload to the target. The pathway into a target that is used by an attacker is referred to as a vector. For instance, a hacktivist creates Trojan horse malware that is loaded onto a host computer and installs a key logger as well as a means of dialing home and reporting collected information. The attack vector was a USB drive containing the Trojan horse, intentionally left on the ground near an individual's parking place and viewed as a freebie by the finder. The finder then plugged the USB drive into their computer, which immediately loaded the Trojan horse malware. The hacktivist might have selected several other attack vectors, including network penetration attempts, a watering hole attack, a phishing attack, an insider attack, and a botnet exploitation.
It is important to understand a number of concepts and technical terms relating to malware. The following list includes technical terms and concepts relating to attacks, hacker tools, and types of malware.
…/…/.jpg.exe
could be a double-extension filename.

Spam is the receipt of unwanted or unsolicited email. It's not truly a virus or a hoax, but it is one of the most annoying things users encounter, and it can create larger problems for the user and the network administrator. Spam might contain links that automatically download malware on a mouse click, or it could contain a link that takes the user to an infected website.
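Returning to the double-extension trick for a moment: a filename such as the one shown above can be flagged programmatically. The following sketch (the list of risky extensions is an assumption for illustration, not an exhaustive blacklist) checks whether a name hides an executable extension behind a document-like one:

```python
import os

# Hypothetical shortlist of executable extensions for illustration only.
RISKY_EXTENSIONS = {".exe", ".scr", ".bat", ".cmd", ".vbs", ".js"}

def has_double_extension(filename: str) -> bool:
    """Flag names whose final extension is executable but which also
    carry an inner extension suggesting a harmless document or image."""
    base, last = os.path.splitext(filename.lower())
    _, inner = os.path.splitext(base)
    return last in RISKY_EXTENSIONS and inner != ""

print(has_double_extension("vacation.jpg.exe"))  # True
print(has_double_extension("setup.exe"))         # False
```

A mail gateway or endpoint scanner would apply a check like this to attachment names before delivery.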
There are many anti-spam applications available, and they can be run by network administrators as well as by end users. As with any filter mechanism, false positives and false negatives are always possible. On occasion, a spam filter will block important messages.
Although spam is a popular term to describe unwanted emails, similar terms have evolved to describe unwanted messages in other mediums. For instance, SPIM (Spam over Instant Messaging) and SPIT (Spam over Internet Telephony) describe specific unwanted messages.
To provide a customized web experience on each visit, a website may place a cookie, a small text file, on the user's browser. This file typically contains information about the user, such as the client's browsing history, browsing preferences, purchasing preferences, and other personal information. For instance, Amazon.com uses cookies placed on a user's computer to record user preferences and browsing habits. When the user returns to Amazon.com, the site reads the cookies and dynamically generates web pages showing products that may interest the user.
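To make the mechanism concrete, the sketch below parses a Cookie header of the kind a returning visitor's browser sends back to a site. The cookie names and values here are invented for illustration; Python's standard-library `http.cookies` module does the parsing:

```python
from http.cookies import SimpleCookie

# A made-up Cookie header, of the kind a browser returns to a site it
# has visited before (names and values are purely illustrative).
raw = "session-id=145-2865; prefs=books%2Celectronics"

jar = SimpleCookie()
jar.load(raw)  # parse the header into named morsels

for name, morsel in sorted(jar.items()):
    print(name, "=", morsel.value)
```

A server-side application reads these values to reconstruct the visitor's prior state, which is exactly why cookies holding sensitive values are a target for attackers.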
Cookies are considered a risk because they have the ability to contain personal information about the user that might be exploited by an attacker. Passwords, account numbers, PINs, and other information may be inserted into a cookie.
Although cookies can be turned off or not accepted by a browser, websites have an ability to write a type of cookie that creates persistent data on a user's computer. An Evercookie, created by Samy Kamkar, stores cookie data in several locations the website client can access. Should cookies be cleared by the end user, the data can still be recovered and reused by the website client. Evercookies are written in JavaScript code and are intentionally difficult to delete. They actively “resist” deletion by copying themselves in different forms on the user's machine and resurrect themselves if they notice that some of the copies are missing or expired.
Similar to user preferences recorded in cookies, a device fingerprint (sometimes referred to as a machine fingerprint or browser fingerprint) is a single string of information collected from a remote computing device for the purpose of identification. With the use of client-side scripting languages such as JavaScript, the collection of device parameters is possible. Normally, a script is run on the client machine by a legitimate website or by an attacker wishing to enumerate the device.
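The idea of reducing many device parameters to "a single string of information" can be sketched as follows. The parameter names and values below are invented stand-ins for what a real collection script would gather; the point is only that hashing a canonical ordering of the parameters yields one stable identifier:

```python
import hashlib

# Illustrative device parameters; a real fingerprinting script would
# collect these from the browser, not hard-code them.
params = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen": "1920x1080",
    "timezone": "UTC-5",
    "language": "en-US",
}

# Sort keys so the same device always produces the same string.
canonical = "|".join(f"{k}={params[k]}" for k in sorted(params))
fingerprint = hashlib.sha256(canonical.encode()).hexdigest()
print(fingerprint[:16])
```

Because the hash is deterministic, the same device re-enumerated later produces the same fingerprint, which is what makes the technique useful for identification.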
If security is of utmost concern, the best protection is to not allow cookies to be accepted. Cookies may be enabled or disabled using any browser options menu. Most browsers allow you to accept or reject all cookies or only those from a specified server. Figure 9.2 illustrates a typical cookie. This screen shows that there are 12 cookies from Google.com. Notice that the selected cookie has various metadata as well as a content field. Applications may read or rewrite any of the information as required. Every browser allows the user to access the cookie file. Notice the X to the right of the cookie. Clicking this allows you to erase the cookie. All cookies can be bulk erased by clicking the Remove All button at the top.
Most of the time the word attachment refers to the file attached to an email document. Attachments may also be included in instant messages (referred to as IMs), text messages, and other types of communication. Attachments offer challenges for the security professional because they can harbor malicious code.
When someone is sending a message with an attachment, their email client marks the attachment with two different labels. These include the data type known as the content type or MIME type and the original filename. The labels instruct the receiver's email client as to what application to use to open the attachment. For instance, graphics data would be displayed as a photograph or picture, an audio or video file would be played in the appropriate media player, and a document such as a Microsoft Word document would be shown as a link that must be clicked to open the document.
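The MIME content type is normally derived from the filename extension, which is easy to see with Python's standard-library `mimetypes` module:

```python
import mimetypes

# The sender's mail client labels each attachment with a MIME content
# type, usually guessed from the filename extension, as shown here.
for name in ("photo.jpg", "track.mp3", "notes.txt"):
    ctype, _ = mimetypes.guess_type(name)
    print(name, "->", ctype)
```

This is also why the double-extension trick works: the label the receiving client trusts is taken from the name, not from the actual bytes of the file.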
Attachments can be dangerous. They can contain an executable program. An executable is interpreted directly by the computer and can possibly install a virus, covertly transfer data to a remote host computer, or even destroy host data entirely. However, executables must be written for a specific type of computer. For instance, an executable written for the Linux operating system will not run on Windows-based computers. Another risk is an attachment containing a script. A script is a file that may be downloaded and executed directly by the browser or host computer; scripts are usually interpreted by other programs, such as Adobe Flash Player or Internet Explorer. An example of malicious script-based malware is the Melissa virus, a Visual Basic for Applications (VBA) macro embedded in a Word document that infected the host computer and then mailed itself to the contacts in the host's email address book.
Not all attachments contain dangerous scripts or malicious software. Some attachments contain audio files, media files, static picture files, or other files that might be malformed and, when interpreted by the email client, cause the client or the entire host system to crash.
The following methods can be used to protect against threats contained in attachments:
Numerous techniques are available to detect malicious code and to take corrective and recovery actions upon discovery. These techniques involve software applications as well as hardware appliances that may be used as controls to mitigate the risk of malicious code and malware affecting a network or network node device.
Add-ons are small applications that are downloaded to the client computer during a web session. Because these are executable programs, they have the potential of doing harm to the client computer. Some of these apps, add-ons, or included scripts do harm unintentionally due to poor programming or poor design, while others are malicious add-ons that are designed with the intent of doing harm to the client computer. The following sections describe two general types of web-based add-ons: Java applets and ActiveX controls.
A Java applet is a small, self-contained Java program that is downloaded from the server to a client and then runs in the browser of the client computer. For a Java applet to execute, Java must be enabled in the client browser.
Java applets are small programs that control certain aspects of the client web page environment. They have been used extensively in website development and greatly enhance the user experience by controlling various kinds of functionality and visuals presented to the end user. Java-enabled websites serve the applet's compiled bytecode, which the client browser downloads from the server and executes, controlling the actions and presentation in the user environment.
Java applets must be able to run in a virtual machine in the client browser. By design, Java applets run in a restricted area of memory called a Java sandbox, or sandbox for short. A sandbox is set up anytime it is desirable to restrict an application from accessing user areas or system resources. A Java applet that runs in a sandbox is considered safe, meaning it will not attempt to gain access to system resources or interfere with other applications. Potential attacks or programming errors in a Java virtual machine might allow an applet to run outside the sandbox. In this case, the applet is deemed unsafe and may perform malicious operations. The creators of Java applets understood the potential harm that running unchecked code on a client computer could cause, which is why they required Java applets to always execute in the Java sandbox.
ActiveX is a technology implemented by Microsoft to customize controls, icons, and other features to increase the usability of web-enabled systems. An ActiveX control is similar in operation to a Java applet. Unlike Java applets, however, ActiveX controls have full access to the Windows operating system. This gives them much more power than Java applets because they are not restricted to running in a sandbox. ActiveX runs in the client web browser and utilizes an author validation technology based on Authenticode certificates, which the client uses as a type of proprietary code-signing methodology. Web browsers are usually configured to require confirmation by the user before downloading an ActiveX control. Security professionals must be careful to fully train end users on allowing the use of ActiveX on their personal computers. An invalid Authenticode certificate produces a pop-up message giving the user the options to accept and continue or to reject the ActiveX plug-in. By choosing the accept option, the user may allow an ActiveX plug-in to install malware or take other harmful actions.
An organization's endpoint devices, such as desktop workstations, printers, scanners, and servers, as well as personally owned devices such as tablets and laptops, must be configured and used in a secure manner for several reasons. First, it is very likely that sensitive, confidential, or even proprietary information is stored or processed on the workstation. Regulatory agencies or contractual relationships may expose the organization to liability, and possibly substantial fines, if information is not protected using generally accepted protection methods. The concept of due care dictates that appropriate controls be put into place to adequately protect information and ensure that it is not improperly disclosed. Second, system integrity is critical. End-user workstations must operate as expected, be available for use, run applications that are not compromised, and provide processed data that is complete and accurate. If a user's workstation has been compromised, lost or altered data may lead to wrong decisions. Loss of workstation availability will most certainly lead to lost productivity.
The security practitioner is essential in ensuring that the organization's end-user workstations meet minimum standards for connecting to the network and maintain the health and hygiene needed to provide reliable computing services.
Since the security practitioner is the first line of defense, it is quite common for them to be involved in end-user training and in communicating best practices for a variety of everyday work tasks, including protecting passwords, surfing websites, and opening email attachments. Users must be made aware of the hazards of downloading software, clicking phishing links, and performing tasks and activities that place the user workstation as well as the network in jeopardy.
Workstation security may be mandated through IT security policy. Typically, such a policy encompasses other policies, such as workstation policies. Workstation policies may include the activities that are carried out on a workstation, such as file sharing, making backups, managing software patch updates, and other general activities that involve the management of an organization's workstations.
Workstation and application passwords should be changed routinely. A standard within the IT industry specifies an interval for password changes as 60 to 90 days. A number of organizations use a multi-tier password policy that involves changing passwords more frequently for sensitive or confidential information. In some instances, the changing of passwords may be more frequent based upon job roles such as power users or system administrators. General password policy administration should include the following:
Figure 9.5 is a typical Microsoft warning screen advising users of a password change policy. The wording on these types of screens may easily be changed to fit the policy.
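A password expiration check under the 60-to-90-day policy described above can be sketched in a few lines. The maximum-age and warning thresholds below are assumptions chosen for illustration, not values mandated by any standard:

```python
from datetime import date

MAX_AGE_DAYS = 90   # assumed upper bound from the 60-to-90-day range
WARN_DAYS = 14      # assumed advance-warning window

def password_status(last_changed: date, today: date) -> str:
    """Classify a password as ok, nearing expiry, or expired."""
    age = (today - last_changed).days
    if age >= MAX_AGE_DAYS:
        return "expired"
    if age >= MAX_AGE_DAYS - WARN_DAYS:
        return "warn"
    return "ok"

print(password_status(date(2024, 1, 1), date(2024, 4, 10)))  # expired
```

The "warn" state corresponds to the advisory screen shown in Figure 9.5: the user is notified before the password actually expires.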
A password attack known as a brute-force attack can be automated to attempt thousands or even millions of password combinations. Account lockout is a feature of password security Microsoft has built into Windows 2000 and later versions of the operating system. The feature disables a user account after a preset number of wrong passwords. The number of failed login attempts as well as the period of time to wait before logins may resume may be set by the administrator. The three policy settings are as follows:
The account lockout feature is enabled through the Group Policy Object Editor and uses the relevant Group Policy settings. There are two other special settings:
Anti-malware software is a type of computer software used to prevent, detect, and remove malicious software. Originally named antivirus software because it was developed to control and remove computer viruses, it now has much broader use, providing protection from Trojan horses, worms, adware, and spyware, among many other kinds of malicious software.
The backbone of endpoint security rests upon the prevention, discovery, and remediation of the effects of malware. Every endpoint on the network should have anti-malware protection software installed. Software should be maintained automatically because updates for new viruses are generally made available every week. Anti-malware software should be configured to automatically scan emails, transferred files, and any type of portable USB drive (such as thumb drive or memory stick), CD/DVDs, and software on any other media. Anti-malware should be set to automatically scan the endpoint at least once a week if not more frequently.
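The simplest of the identification methods anti-malware products use is signature matching. The sketch below reduces it to its essence: hash the file's contents and compare against a set of known-bad digests. The "known bad" hash here is invented from a placeholder payload purely for illustration:

```python
import hashlib

# Invented blacklist: in practice this would be a vendor-supplied
# database of digests of known malware samples.
KNOWN_BAD = {
    hashlib.sha256(b"malicious payload").hexdigest(),
}

def is_known_malware(data: bytes) -> bool:
    """Signature-style check: is this file's digest on the blacklist?"""
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD

print(is_known_malware(b"malicious payload"))  # True
print(is_known_malware(b"harmless file"))      # False
```

The weakness is also visible in the sketch: changing a single byte of the payload changes the digest, which is why signature scanning must be supplemented with heuristic and behavioral methods.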
Anti-malware software makes use of a number of different methods to identify unwanted software:
The implementation of an antivirus/anti-malware software solution should include the following:
Through an organizational IT security policy, antivirus/anti-malware software must be up and running on all network endpoints, including personal computers, file servers, mail servers, and network attached personal devices.
Most workstations store files on a local hard drive. If a workstation stores files locally that contain critical or sensitive information, those files should be transferred to a server for storage and scheduled backup. The level and frequency of backup required depends on the criticality of the data. This data should be considered in the IT backup policy. To ensure adequate backups of files, they should be copied to a secure server location that is backed up on a regular schedule. Some information within a specific security category or risk level may be regulated by IT security policy and should never be written to local hard drives.
Normally, workstations are created using a standard workstation image, but over time additional software applications and data are added to the system drives. Users should be made aware of techniques to back up applications and data to a central source in case a workstation must ever be reimaged, since reimaging erases all user data.
If workstations are ever backed up to portable drives, optical discs, floppy disks, or any other portable media, security precautions should be considered for the protection of the backup media. Not only should the backup data be kept in a safe place, it should also be protected from possible theft and exfiltration.
Anonymous access control is an access control methodology in which the server or the workstation may ask for user identification (username) or authentication information (user password) but no verification is actually performed on the supplied data. In many cases, the user is simply asked for an email address to access the system.
An endpoint or workstation that provides a File Transfer Protocol (FTP) service may provide anonymous FTP access. Users typically log into the service with an anonymous account when prompted for a username. In some applications, such as updating software, downloading information, or sharing files between users, identification of the user is not required.
Anonymous access may also be provided to various wireless network subnets that host guest access, such as in lobbies of buildings. Persons joining this type of wireless network can usually access only a limited amount of information and are totally screened from the internal network.
It is suggested that you do not allow anonymous access of any kind. User authentication should always be used to protect file transfer for any foreign user access to either a workstation endpoint or a server. User access should always be monitored by a network logging mechanism.
Proper endpoint configuration is essential for the security of not only the system but the entire network. Poorly configured devices can cause more harm than a software defect. Every endpoint should be scanned to detect and eliminate inappropriate share permissions, unauthorized software, and excessive user privileges. Weak or missing passwords, or services running with elevated or administrative privileges, may expose the device to a variety of exploits.
Patch management is crucial for the security of the endpoint. Many organizations utilize a centralized patch management system. The system automates patch distribution and installation and logs and verifies that patches have been made and updates have been installed. Patches should always be tested on simulated production systems prior to installation in the network. Users should be advised to leave their workstations powered up on the evening that the patches will be pushed out to the workstations.
All mobile devices issued by the organization should have security patches applied in a timely manner. Users should be directed to make their systems available at a specific place and time so that updates and patches may be installed.
Provision should always be made to back up workstations as well as mobile devices prior to installation of any patches, updates, or system upgrades. And for updates and upgrades, the appropriate procedures should be followed as outlined in the organization's change management policy.
The physical security of valuable assets is always a concern to any organization. Theft of a laptop or PC is always top of mind at the mention of physical security, but theft of information from unprotected workstations and unauthorized access to network wiring cabinets are also possibilities with lax physical security.
Many organizations implement layered personnel access security. For example, layers from public to restricted access might include a public lobby area, restricted work areas, and highly restricted IT areas. Most buildings include restricted badge access or even more stringent measures featuring biometric authentication, security guards, CCTV cameras, and even mantraps.
End users can take various steps to protect the assets of a business, which include both the physical hardware and the data contained on it. During a security awareness class, users should be made aware of the following:
Physical security should be a continuous concern for an organization and should become part of the organization's security culture.
There are several aspects of both physical and logical security in relation to CRT and LCD workstation displays. Of course, the display is the eyes into the data, the application, and the entire network. Care should be taken with information that appears on a workstation display. A security practitioner can utilize the following when securing visual information:
A clean screen/clean desktop policy is a security concept that limits or controls information shown on the display screen. Clean desktop is the display of a minimum number of desktop icons. When the number of icons displayed on the screen is limited, prying eyes are not able to view folder names, document names, or other content displayed on the workstation desktop. A simple layer of defense is created in that an unauthorized individual gaining access to the system would not be able to easily click the desktop icon to gain access to valuable information.
The clean screen concept is a security technique that can be used to immediately blank out a monitor display screen or display dummy information. This is used to immediately hide anything displayed on the screen. There are a number of techniques used to accomplish this that are described on the Internet. One technique is that you can just press the hotkey combination Win+D to immediately return to the desktop. A fake background may be applied to the desktop to appear as an actual working document. To accomplish this, take a screenshot of an application page such as a spreadsheet or word-processing document. Save it as a desktop background. Hide all of the icons on the desktop. Once you use the hotkey to switch, the fake document will appear.
Malware might easily enter any user's workstation with a simple mouse click in an email. As a security practitioner, it may become your responsibility to make your organization's users aware of the following secure email practices and the downside of various email schemes:
Of course, the IT department administration of email can be enforced by IT policy, software settings, and other security controls. Spam filters, data loss protection devices, as well as other protective controls can be put in place to mitigate the problems with endpoint email.
Firewall software should be enabled on all endpoint devices. Windows and Mac OS currently include host-based firewall protection. Firewalls can be set to block all unused ports on the endpoint device and to allow only whitelisted inbound traffic.
A host-based firewall is a piece of software that runs on the endpoint device and restricts incoming and outgoing information for that one endpoint. While network firewalls have much more detailed rule sets, a host-based firewall might simply list items to allow or deny.
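A host-based rule set of the "simple list of items to allow or deny" kind can be modeled in a few lines. The rules and ports below are invented for illustration; the important behavior is the default-deny fallback for any port not explicitly listed:

```python
# Illustrative host firewall rules: ordered (action, port) pairs.
RULES = [
    ("allow", 443),  # HTTPS
    ("allow", 22),   # SSH
    ("deny", 23),    # Telnet explicitly blocked
]

def decide(port: int) -> str:
    """Return the first matching rule's action; deny anything unlisted."""
    for action, p in RULES:
        if p == port:
            return action
    return "deny"  # default deny: all unused ports are blocked

print(decide(443))   # allow
print(decide(8080))  # deny
```

This mirrors the whitelist posture recommended above: traffic is permitted only when a rule explicitly allows it.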
Many current antivirus software applications contain a built-in firewall as an added benefit. It is important to read instructions carefully because the operating system firewall, as well as a malware application firewall, might conflict and cause a self-inflicted denial of service. Some firewalls contained in an antivirus software application may suppress website pop-up windows, filter emails, scan USB devices, and provide a variety of other attractive options.
It is always recommended that sensitive or confidential data not be stored on a local workstation or endpoint but be retrieved from a secure server as required. In the event that personally identifiable information (PII), HIPAA data, financial information, or other restricted data must be saved to the user's local hard disk, it should be heavily encrypted. We have all heard on news programs that laptops have been stolen, exposing tens of thousands of accounts with information that was stored locally on the laptop hard drive. In all cases, sensitive information must be encrypted when on a personal computer or personal storage device such as a USB drive or an external hard drive.
All versions of the Microsoft Windows operating system developed for business use feature the Encrypting File System (EFS). The default setting for Windows operating systems is that no files are encrypted. EFS allows users to apply encryption to individual files, directories, or the entire drive, and it may be invoked through the use of Group Policy in Windows domain environments. EFS is available only on hard drives and USB drives formatted with the Microsoft New Technology File System (NTFS) format.
Consideration should be given to exactly how to manage encrypted data stored in files and folders on an endpoint system. In most instances, files inherit the security properties of the folder in which they reside. Care should be taken when processing data from encrypted files that the resulting processed data is not placed in a non-encrypted folder, thereby declassifying or unencrypting the information.
Users should never encrypt individual files but should always encrypt folders. Encrypting files consistently at the folder level ensures that files are not unexpectedly decrypted by an application. A best practice is to encrypt the My Documents folder on all endpoint devices so the user's personal folder, where most documents are stored, is encrypted by default.
Encryption products such as the open-source TrueCrypt, Microsoft's BitLocker, or the commercial product Check Point PointSec can be used to encrypt sensitive or protected data.
A Trusted Platform Module (TPM) is a dedicated microprocessor mounted on a device's main circuit board that serves as a cryptoprocessor. The TPM offers many advantages, such as offloading cryptographic processing from the main CPU to a dedicated microprocessor. TPMs offer many services, such as providing a random number generator and the generation and storage of cryptographic keys. Beginning in 2006, most laptop computers targeted at the business market have been produced with a TPM chip. Use of the Trusted Platform Module has spread to other devices such as cellular phones, personal digital assistants, and even dedicated gaming devices.
The Trusted Computing Group (TCG), an international consortium of approximately 120 companies, wrote the original TPM specification and maintains revisions and updates to the standard. The current revision is TPM 2.0, released in the fall of 2014.
Trusted Platform modules provide numerous services, some of which are listed here:
All server, desktop, laptop, thin client, tablet, smartphone, personal digital assistant, and mobile phone devices procured for use to support United States Department of Defense (DoD) applications and functions must include a Trusted Platform Module version 1.2 or higher (when available). The intended use for the device is for endpoint identification, authentication, encryption, monitoring and measurement, and device integrity.
Local, non-network storage devices, such as USB flash drives, SD cards, hard disk drives, CDs, and other personal storage devices, are commonly used to store data within an organization. Users of these products are sometimes unaware of the need to ensure the privacy of the information stored on these devices.
Endpoint data sanitization is the process of taking deliberate action to permanently remove or destroy the data stored on a storage device. A sanitized device has no usable residual data; even advanced forensic tools should never be able to recover the erased data. Sanitization makes use of two methods of data destruction:
Users who routinely work with highly sensitive or regulated data, such as HIPAA-protected data, credit card information, financial information, or other personally identifiable information, should be provided with a method to dispose of, or at least permanently erase, storage devices. Many organizations provide users with a secure drop box into which they place printed materials, drives, CDs, and other media containing sensitive information. The boxes are collected by a certified document destruction company and shredded to meet various specifications. Over the past several years, the National Institute of Standards and Technology (NIST) Special Publication 800-88 revision 1, "Guidelines for Media Sanitization," has become the reference for data erasure compliance.
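One common erasure approach, overwriting a file's contents before deleting it, can be sketched as follows. This is a toy illustration only: it does not meet NIST SP 800-88 requirements, and on flash media with wear leveling or journaling filesystems, overwriting in place gives no guarantee that the old blocks are gone:

```python
import os
import tempfile

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file's bytes with random data, then unlink it.
    Illustrative only; not a compliant sanitization procedure."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to the device
    os.remove(path)

# Demonstrate on a throwaway temporary file.
fd, p = tempfile.mkstemp()
os.write(fd, b"sensitive data")
os.close(fd)
overwrite_and_delete(p)
print(os.path.exists(p))  # False
```

For real compliance, organizations rely on certified tools or physical destruction, as the surrounding text describes.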
Many software applications are available for download on the Internet, both freeware and commercial. Any software obtained through downloading or via DVDs or CDs, from any source, must be considered copyright protected. Even freeware and shareware often require a contractual agreement, which is mandatory with commercial software and referred to as an End-User License Agreement (EULA). Unauthorized use of copyrighted software may expose an organization and user to substantial monetary fines. An Acceptable Use Policy (AUP) banner should advise end users of the consequences of indiscriminately downloading or installing software on a workstation. Endpoint devices such as end-user workstations should be regularly scanned for unauthorized installed software, since downloading unauthorized software may expose the endpoint device to vulnerabilities that attackers can take advantage of.

Users should be made aware during security awareness classes that the Internet can be extremely dangerous when it comes to downloading software products; many products seem harmless but contain viruses that can wreak havoc on computers. Some of the most offensive sites are those offering hardware drivers for download. These sites appear to be legitimate representatives of the manufacturer and seem to offer authorized drivers and updates for manufacturers' hardware products, but their design makes it easy for the unaware user to become confused and click links that immediately infect a workstation.
Reputable manufacturers offer a method of verifying the authenticity and integrity of software files, including applications as well as approved patches and upgrades offered for download.
Code signing is a method through which the author is authenticated and confirmed as the original provider of the software. Code signing is accomplished by the software author using a cryptographic hash algorithm to process the software code and obtain a hash value, or message digest. The hash value is then published on the author's website along with the name of the cryptographic hash algorithm that was used to produce it.
The author then encrypts the hash value with their asymmetric private encryption key and includes the encrypted hash value with the software download. The encryption of the hash value is referred to as a digital signature. Through the use of public key infrastructure (PKI) and the application of digital certificates, it may be proven that the only person capable of signing the code was the owner of the private encryption key.
Once the software is downloaded, the user decrypts the hash value using the author's public key, which is found in the author's digital certificate, and then hashes the software code using the same hashing algorithm to obtain a second hash value. The second hash value is compared to the first hash value, and if they match, the user can be assured of the integrity and authenticity of the software code. Although this sounds like laborious work, it can be automated through most browsers.
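The integrity-check half of this process can be sketched in a few lines: hash the downloaded bytes with the same algorithm the author used and compare against the published digest. Verifying the digital signature itself would additionally require the author's public key and certificate chain, which the standard library does not provide; this sketch shows only the hash comparison, with the "published" digest simulated locally.

```python
import hashlib

def verify_download(file_bytes: bytes, published_digest: str) -> bool:
    """Hash the downloaded code with the author's stated algorithm
    (SHA-256 assumed here) and compare against the published hash value."""
    actual = hashlib.sha256(file_bytes).hexdigest()
    return actual == published_digest.lower()

software = b"example installer bytes"
# Simulates the hash value the author would publish on their website.
published = hashlib.sha256(software).hexdigest()

print(verify_download(software, published))         # True: integrity confirmed
print(verify_download(software + b"!", published))  # False: file was altered
```

As the text notes, browsers and installers automate exactly this comparison, plus the signature check, behind the scenes.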
Centralized application management (CAM) is becoming popular with the advent of private and commercial cloud systems. Virtual desktop is a method of remotely implementing and administering operating systems as well as applications on a server rather than on the local endpoint device. When this type of client/server model is used, the data is not stored locally, and no applications are executing locally. All end-user keystrokes and mouse clicks are transmitted to the server, and the generated visual display is sent to the endpoint device. By incorporating client software that manages both the virtual desktop session and virtual private network communications onto a USB drive, end users are no longer limited to a single endpoint location. They simply insert the USB drive into a workstation to log on and create their virtual desktop. The user is no longer constrained by the platform or device they wish to use. Using a virtual desktop, data and applications may be made available across any device.
Cloud-based Software as a Service (SaaS) implementations will enhance the adoption rate of subscription-based software models. Subscription-based software models allow for the continuous update and maintenance of application software without the requirement to distribute patches and upgrades to the end-user endpoints. This provides an always up-to-date software application, which may be accessed at any time and anywhere by the end user.
There are hundreds of cloud-based applications. Microsoft Office 365 is an implementation of a popular business productivity online suite using a cloud-based subscription model. Coupled with the unlimited Microsoft OneDrive cloud storage, users are no longer required to have a software application or data stored locally on an endpoint device.
Lost or stolen devices possibly containing corporate data create a huge risk for an organization. Not only can the data on the device be compromised, but user passwords as well as confidential information might all be stolen and possibly sold. As end users use their personal mobile devices such as tablets, smartphones, and other devices for personal and work-based data storage, the harm to the organization should the device be stolen or lost is greatly increased.
The IT information security policy of an organization must address the use of both organization-issued devices and personal devices used for business. Here are some recommendations:
Along with the recommendations just cited, additional recommendations for organization-issued devices should be taken into consideration:
All portable devices should invoke a locking screen requiring a PIN or fingerprint after a period of inactivity.
It is highly recommended that in the event of a device being stolen or lost, the authorities within the jurisdiction of the suspected theft be notified as soon as possible. Although device locator applications are available for most devices that pinpoint the current location of the device, it is highly recommended by most police organizations to not pursue the stolen item personally but to involve the local officials. This eliminates the likelihood of personal harm during a potential confrontation situation.
Thieves specifically target executive phones or portable devices for sensitive data and restricted information. Sensitive data, especially involving a new product release, merger and acquisition information, and other organizational financial or proprietary information, may have a sizable street monetary value. Traveling executives should be made aware, through security awareness briefings, that they may be specifically targeted for device theft and to take special precautionary and protective measures.
Endpoints can be hardened by deleting all of the nonrequired services and applications. All network services that are not needed or intended for use should be turned off. While all vulnerable applications should be patched or upgraded, other workstation hardening techniques include eliminating applications or processes that cause unnecessary security risks or are not used. File permissions, shares, passwords, and access rights should be reviewed.
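One quick check when hardening an endpoint is to confirm which network services are actually listening. The sketch below probes a handful of commonly abused TCP ports on the local host; the port list is an illustrative assumption, and any open port that is not required should be investigated and disabled through the operating system's service manager.

```python
import socket

def check_open_ports(host="127.0.0.1", ports=(21, 23, 80, 135, 445, 3389)):
    """Report which of the given TCP ports accept connections on the host."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

open_now = check_open_ports()
print(open_now)  # any listed port is a listening service to review
```

A production scan would use a purpose-built tool, but even this sketch makes the review of unneeded services repeatable.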
A host-based intrusion detection/prevention system (HIDPS) is software that runs specifically on a host endpoint system. Operating similarly to a network-based NIDPS, the software monitors activities internal to the endpoint system. Activities such as examining log files, monitoring program changes, checking data file checksums, and monitoring network connections are among several tasks managed by a HIDPS. Data integrity, logon activities, usage of data files, and any changes to system files can be monitored on a continuous basis.
Host-based intrusion detection prevention systems operate similarly to anti-malware software in that they use signature-based detection (rule based) and statistical anomaly–based detection as well as stateful protocol analysis detection to monitor activities within an endpoint. HIDPS software, similar to the network-based NIDPS, may take predetermined actions based upon the discovery of suspicious activity. This may involve terminating services, closing ports, and initiating log files.
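The file-checksum monitoring a HIDPS performs can be illustrated with a minimal sketch: record a known-good SHA-256 baseline for monitored files, then re-hash and report any mismatch. The scratch file name `demo.cfg` is invented for the demonstration; a real HIDPS would monitor system binaries and configuration files and protect its own baseline from tampering.

```python
import hashlib
import os

def baseline(paths):
    """Record a SHA-256 checksum for each monitored file (the known-good state)."""
    return {p: hashlib.sha256(open(p, "rb").read()).hexdigest() for p in paths}

def detect_changes(known_good):
    """Re-hash each file and report any whose checksum no longer matches."""
    return [path for path, digest in known_good.items()
            if hashlib.sha256(open(path, "rb").read()).hexdigest() != digest]

# Demonstration with a scratch file standing in for a monitored system file.
with open("demo.cfg", "w") as f:
    f.write("setting=1")
snapshot = baseline(["demo.cfg"])
with open("demo.cfg", "w") as f:   # simulate an unauthorized modification
    f.write("setting=999")
changes = detect_changes(snapshot)
print(changes)                     # the modified file is reported
os.remove("demo.cfg")
```

On detecting a change, a HIDPS would then take one of the predetermined actions the text describes, such as alerting, logging, or terminating a service.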
Mobile devices should be protected at all times. This requires comprehensive measures for their physical security as well as protection of all electronic data residing on them. Several major data breaches have been caused by simply leaving a laptop exposed in a parked car while running into a store.
Mobile Device Management (MDM) is a corporate initiative that manages the growing use of Bring Your Own Device (BYOD) policies in the workplace. It addresses both the requirement of the organization for network security and the protection of corporate information as well as recognizing the desire for the organization's members to use their personal devices in the workplace. Although seen by some as highly restrictive and intrusive, MDM policies prevent the loss of control and impact to organizational assets through data leaks and information exfiltration as well as establishing a baseline for device patching, updates, and OS hygiene.
IT policies involving the security of mobile devices should include steps that may be taken by both the IT department and the end user and include user awareness training concerning risks involved with mobile devices. A mobile device security strategy for an organization may involve the following:
Corporate Owned Personally Enabled (COPE) is a program whereby the organization owns and controls the device while the user may use the device for personal purposes as well as business activities. The IT department still maintains control over updates, upgrades, and software installation. This is ideal for a network access control program that monitors the health and hygiene of devices connecting to the network.
COPE is almost the complete opposite of BYOD: the user is not bringing their own device to the workplace but instead using a corporate-owned device for their purposes. To implement this program, the organization purchases or subsidizes the purchase of a catalog selection of devices and makes them available to organization users. The organization usually participates in the monthly connection cost involving phone, data, and multimedia packages. Through this program, economies of scale can be realized by bulk purchasing devices for distribution while also negotiating connection packages with service providers.
COPE programs are not new. Many organizations have been issuing devices to employees and members for a number of years. COPE programs provide the organization with more oversight and policy enforcement capability, thus reducing some of the risk that comes with BYOD. It is now a viable economic means to take control of the BYOD environment.
If one had to sit back and wonder where the world of computers will be in five years, the answer would emphatically be two words: the cloud. The truth of the matter is that the industry is there now. Cloud computing, as it was originally called, has since been shortened to just “the cloud.” Without a doubt, there remain a fair number of old-school computer folks around who will jokingly point at a USB drive and proclaim to anyone listening, “That's my cloud right there.” Unfortunately, they certainly do not grasp the significance and the magnitude of what currently exists and what is yet to come.
The cloud has appeared at a point in time that is nearly a perfect storm in the world of network computing. There are two major forces at work that together are forcing the adoption of cloud technology on the world stage faster than almost any other computer technology has been adopted in the past. On one hand, corporate and organization networks are growing at unprecedented rates. Corporate data centers are struggling with limited budgets, physical space, time, and manpower, among other factors, just to keep up with end-user demand from corporate departments. Applications and corporate databases are requiring terabytes and petabytes of storage, while the numbers of users requesting access are seemingly growing exponentially, all of the time requiring larger and larger amounts of network resources. On the other hand, financial forces are at work. IT department budgets have continuously been strapped. The saying in those departments has been “do more with less,” while the C-level folks continuously struggle to provide greater returns to investors and shareholders. And, just when we need it, along comes a computing technology that addresses both needs.
The cloud offers a future that promises to address both the requirements of the IT department and the cash flow desires of corporate management. The IT department is thrilled because it no longer must expand and maintain massive amounts of corporate-owned networking resources. The cloud offers an ability to outsource all of the headaches involved with owning networking resources. The equipment no longer needs to be purchased, installed, configured, maintained, upgraded, and operated 24/7 by a small army of personnel, all employed by the organization.
The cloud offers all of the resources required, including resources that can be virtually configured as well as expanded and collapsed on demand. No hardware ownership. The C-level folks are ecstatic about the arrangement.
The financial benefits of the cloud will make a substantial impact on the balance sheet as well as the bottom line of the organization. Simply put, purchasing servers, routers, switches, cooling equipment, and other hardware items is classified as a capital expenditure (or simply CapEx) for a business. For accounting purposes, a capital expenditure must be spread out over the useful life of the asset, and this is referred to as depreciation.
The outsourced cloud model is identical to that of a utility company for services such as telephone, electricity, and natural gas. The corporation pays on a regular basis, usually monthly, for the amount of service it uses. For financial purposes, this is referred to as an operational expense, or simply OpEx. And just as with any other utility, the organization pays for only the services that it uses at any given time. To add to the benefit, other expenses related to operating a large organization-owned data center will also be reduced.
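The CapEx-versus-OpEx contrast can be made concrete with a small arithmetic sketch comparing straight-line depreciation of a purchased asset against a pay-as-you-go subscription. The dollar figures and five-year useful life below are invented for illustration, not drawn from the text, and are no substitute for actual accounting guidance.

```python
def annual_depreciation(purchase_cost, useful_life_years):
    """Straight-line depreciation: the CapEx is expensed evenly over the
    asset's useful life (illustrative figures only)."""
    return purchase_cost / useful_life_years

# Assumed figures: a $60,000 equipment purchase depreciated over 5 years
# versus an equivalent cloud subscription at $1,100 per month.
capex_per_year = annual_depreciation(60_000, 5)
opex_per_year = 1_100 * 12

print(capex_per_year)  # 12000.0 expensed each year for five years
print(opex_per_year)   # 13200 paid only while the service is actually used
```

The OpEx figure may even be higher per year, yet it requires no up-front capital and stops the moment the service is no longer needed, which is the flexibility the text describes.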
So as you can see, the perfect storm is meeting the requirements from the IT department for additional flexible services that it can expand and contract on demand as well as satisfying the requirements from the finance department to switch from financing and depreciating an ever-increasing amount of fixed assets while converting the cost into a simple monthly payment out of operating expenses. Corporations and organizations large and small are rapidly adopting the cloud.
Many technology users today are quite familiar with storing data on individual public cloud systems. Google, Amazon, Microsoft, and Dropbox are just a few of the hundreds of service providers vying for the attention of the technology-savvy user community.
While millions of users have registered for free or subscription-based service accounts, hundreds of thousands of organizations, small and large, are already making use of cloud provider offerings.
Cloud computing may be readily accessed by any type of communication-enabled device or system. Traditional end-user devices such as smartphones, tablets, personal digital assistants (PDAs), pagers, laptops, desktops, and mainframes are at the top of the list of cloud access client devices. Although the first things that come to mind are those devices that we touch or type on, currently tens of thousands, and in the near future millions, of devices will directly access the cloud automatically.
For instance, some types of intrusion prevention systems and IT network appliances right now receive automatic updates from the cloud while transmitting information such as network attack signatures and profiles, including potential zero-day attacks. This attack data is automatically analyzed and immediately shared with other cloud-based devices within the same product group to inform and protect other networks with similar devices installed.
Cloud computing may be viewed as a huge data collection repository that monitors and collects information from sensors, industrial control devices, automobiles, and other mobile equipment such as trucks, tractors, forklifts, boats, ships, and construction equipment as well as virtually any conceivable type of device with communication capability. Although the cloud may be a data collection repository, it may also be a collection of applications that download information, instructions, activity updates, and other operational reference data to connected devices.
By definition, the cloud is a collection of computer network hardware, such as servers, switches, routers, and other traditional networking equipment. These items operate in buildings that may be located across town, within the state, several states away, or even in a totally different country. In the case of a private cloud, one that is privately owned and operated just for the organization, the cloud could actually be down the hall or within a data center in the same building.
Connecting to the cloud may be accomplished by a wide range of methods, although most users communicate with the cloud through a personally owned device such as a laptop, tablet, or smartphone.
Although traditional methods of connecting to the Internet currently exist, other direct cloud connection methodologies are in place and used by a variety of products and devices. This includes satellite links directly to a cloud facility, hardwired packet-switching networks directly connected to a cloud facility, and other methods that may include radio transceivers.
Hybrid communication technology can aggregate communications within a geographic location. For instance, consider that your house is a cloud-connected entity. Imagine that you have network attached sensing devices installed around your home. There is an embedded controller in each item referred to as a dedicated computing device (DCD). Each device includes a processor, volatile and non-volatile memory, a variety of sensors and controllers, and a wireless transceiver, either Wi-Fi or Bluetooth. Each device may be specifically programmed to carry out a dedicated function, such as control something, report on something, or monitor something, and take a specific action upon an event occurring. Such items, as listed here, may be found around your home in the near future:
And the list continues.
Each of the items in the preceding list represents both controllers and sensors dedicated to a specific purpose, with some of these devices being actually smaller than a key cap on a standard keyboard or possibly no larger than an aspirin. For instance, the vehicle sensor might report on the level of gas in the tank, air in the tires, or even a stored calendar of who's to use it next and their planned destination. The aquarium sensor may be a floating device that continually tests and reports on the temperature and quality of the water. The outdoor lawn and plant sensor may send continuous readings of soil moisture, nitrogen content, available sunlight, and other health and wellness parameters. Air-conditioning/heating unit sensors forward data on the outside compressor as well as the in-attic condenser and heating unit.
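The report-and-act behavior of such a dedicated computing device can be sketched as a simple control loop: forward every reading, and take a dedicated action when a reading crosses a threshold. The soil-moisture values and the threshold below are invented for illustration.

```python
def dcd_loop(readings, threshold, on_event):
    """Process a stream of sensor readings; call on_event for each reading
    that crosses the threshold, mimicking 'take an action upon an event.'"""
    reports = []
    for value in readings:
        reports.append(value)      # routine report forwarded to the cloud
        if value > threshold:
            on_event(value)        # dedicated action, e.g., raise an alert
    return reports

alerts = []
moisture = [22, 25, 31, 48, 27]    # invented soil-moisture percentages
dcd_loop(moisture, threshold=40, on_event=alerts.append)
print(alerts)  # only the out-of-range reading triggers the action
```

A real DCD would run this loop in firmware and transmit over Wi-Fi or Bluetooth, but the event-driven shape is the same.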
It is interesting to consider the future of “wearables.” Such devices will include sensors and devices manufactured into your shoes and clothing, strapped onto your body, and perhaps embedded on your body that will provide various capabilities such as parameter monitoring and functional control. Yes, pacemakers today are embedded devices that may report on certain conditions and have their parameters adjusted wirelessly by medical professionals.
Imagine that in the near future your third-grade daughter comes home from school early and reports that she feels sick. You pop open a bottle of sensors and she swallows one with a little water. Virtually immediately, you begin receiving diagnostic information on your cell phone measuring dozens of parameters within her body. The cell phone application automatically downloads cloud-based data coinciding with the parameter measurements of the ingested pill and recommends health and remedy methods for you to take.
All of these devices must communicate by some method. Not all sensors and controllers have the power to communicate directly to the Internet. Imagine a home-based proxy device that communicates by Bluetooth over short distances within the house and then converts the signals into Wi-Fi or cellular radio format for forwarding to the Internet. This may be a dedicated black-box device that simply plugs into a wall outlet.
As you can see, all of these devices and capabilities will be available in a very short period of time. The challenge will be securing these devices with cloud security and encryption methods to mitigate the problems of intrusion and attack. Although you might be able to live with an attack on your swimming pool sensor, you might not if the attack is on your pacemaker. This illustrates the seriousness and requirements concerning cloud security.
The computing industry continuously seeks a common source of reference for concepts and terminology to better understand and communicate technology concepts. The National Institute of Standards and Technology (NIST) through NIST Special Publication 800-145, “The NIST Definition of Cloud Computing,” lays out some fundamental cloud-defining concepts.
The NIST special publication defines some broad categories that are characteristics of clouds:
As a security practitioner, it will become abundantly clear that every benefit has a reciprocal potential of a security concern. The security practitioner should be concerned about the following initial broad cloud-based security issues:
Clouds may be accessed from a broad number of devices. This greatly expands the requirements for cloud-based access control as well as remote authentication techniques. Since clouds may be accessed from devices directly without them being connected to the organization's LAN, proper access and authentication controls must be provided at the cloud edge rather than in the business network. These controls must also be able to provide correct authentication across a number of personally owned devices and a variety of platforms.
Cloud providers are in the business of selling services. They will gladly grant additional capabilities upon request. The organization must put into place corporate policies and organizational structures such as a change control board, a cloud services request control board, or a services request procedure that controls the allocation of cloud services to requesting corporate individuals, departments, and entities. This mitigates the risk of individual departments requesting additional services directly from the cloud service provider without prior authorization.
Any IT department that has managed a corporate “shared drive” knows how these can fill up very quickly. Also, it is usually very difficult to determine the owner of files and information stored on the shared drive. It's not unusual to find, for example, a series of PowerPoint presentations from 2002 with no information about whether they can be deleted or not. The same is absolutely true of cloud-based storage. Once storage space begins to expand, it is very difficult to contract it. Unlike the simple shared drive on an in-house network, cloud-based storage containing the same information will be incrementally more expensive.
Cloud providers have an ability to pool resources as previously discussed. Some very serious security implications exist with this concept. First, cloud resources are shared among a huge number of tenants. This means that other users are on the same server equipment at the same time. The possibility that data may be written into another tenant's area exists. Second, in the event of another tenant conducting illegal activities, the entire server might be seized along with your data. Third, unscrupulous cloud provider personnel may access and exfiltrate your data. Fourth, investigations are complicated through the jurisdictional location of data on cloud service provider equipment. Fifth, information security is totally within the control of the service provider. If penetrated, corporate information, plus the information of many other clients, may be compromised. These are just a few of the security concerns for the security practitioner concerning cloud services.
It's not unusual in some countries to run a wire over to a neighbor's electrical meter. In fact, even in our country, theft of cable television services was quite a fad years ago. Theft of any measured service is possible if the attacker is determined. It is incumbent upon the cloud client to thoroughly check billing statements against authorized and requested services to mitigate the possibility of service theft.
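The billing-statement check can be sketched as a reconciliation of billed line items against the quantities the organization actually authorized. The service names and quantities below are invented for illustration.

```python
# Hypothetical authorized quantities versus what the provider billed.
authorized = {"compute-small": 10, "storage-gb": 500}
billed     = {"compute-small": 10, "storage-gb": 750, "gpu-node": 2}

def billing_discrepancies(authorized, billed):
    """Return billed items that exceed, or were never, authorized."""
    issues = {}
    for item, qty in billed.items():
        allowed = authorized.get(item, 0)
        if qty > allowed:
            issues[item] = {"billed": qty, "authorized": allowed}
    return issues

issues = billing_discrepancies(authorized, billed)
print(issues)  # overage and an entirely unauthorized item are both flagged
```

Any discrepancy surfaced this way warrants investigation: it may be theft of service, or simply an unapproved request that bypassed the change control process described above.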
There are a variety of types of cloud services. Generally, they are classified by who owns the equipment upon which the cloud services are running and who are the potential clients that might use the services of the cloud provider. Again, NIST Special Publication 800-145 offers a list of four cloud deployment models. These cloud models are described in the following sections.
A private cloud is typically a cloud constructed as a part of the organization's existing network infrastructure. This allows an organization to provide cloud services complete with expansion and elasticity components for internal clients. This type of cloud is best utilized in the following cases:
A private cloud may take the form of an intranet or an extranet with the added benefits of virtual allocation of assets and elasticity of storage.
Private clouds, having been created internally with organization-owned networking equipment, maintain the same security risks associated with any internal network. The organization must still provide the proper network safeguards, including intruder prevention controls and network access controls, as well as maintain all equipment with proper patching and updates.
Community clouds can be established to provide cloud services to a group of users that can be defined as users requiring access to the same information to be used for a similar purpose. For instance, rather than set up an FTP site to transfer information between departments or external users, it is quite possible to set up a community cloud where everybody can access and share the same information. Using a specialized application, document versioning, access control, data file lockout, and user rights may be enforced. With a community cloud, it's easier to train inexperienced users in access control techniques based upon browser access rather than assign FTP accounts and educate users on the use of file transfer protocol techniques.
A community cloud may be as simple as a Dropbox location from which attendees to a family reunion may easily access all of the photographs that were taken. It's actually simpler to implement and explain the community cloud to inexperienced users than to explain the use of Facebook.
Security problems exist in a community cloud. Management of access control becomes a paramount interest. Authentication techniques as well as encryption key distribution become major factors between non-related third-party entities.
The public cloud is probably the most common cloud platform. Public cloud services are offered by such providers as Microsoft, Google, Apple, and Amazon, among many others. The public cloud service is the easiest cloud offering for any individual to utilize. The most well-known have brand names such as SkyDrive, OneDrive, Amazon Web Services, iCloud, Google Drive, Carbonite, and Dropbox, and there are many others. The beginning monthly charge is usually less than $20, and they provide from 5GB to 500GB of storage. Most providers offer low-cost or no-cost entry-level services and escalate from there. Public clouds are easy to set up and easy to maintain.
Security problems exist in a public cloud environment. In the event that a public cloud is compromised, passwords, access control, and proprietary information may be exposed. The clients are purely at the mercy of the public cloud providers to provide adequate security.
Hybrid cloud structures consist of combining two forms of cloud deployments. Here are two examples:
Hybrid clouds offer a great degree of flexibility to organizations that require cloud-based services but wish to capitalize on the cost savings afforded by cloud service providers. However, hybrid cloud security compounds the security threats of the private cloud with those of the other cloud model that is included.
Although there appear to be a number of “ _____ as a Service” offerings on the market, NIST Special Publication 800-145 offers a list of three cloud service models, which are described in the following sections.
The Software as a Service (SaaS) model allows the user or client access only to an application that is hosted in the cloud. Such applications run on a cloud provider's equipment, and the SaaS provider manages all hardware infrastructure and security. Access is usually provided by identification authentication and is based upon browser-based interfaces so that users can easily access and customize the services. Each organization serviced by the SaaS provider is referred to as a tenant. The user or customer is the individual or company licensed to access the application.
There are several delivery models currently in use within SaaS:
The Platform as a Service (PaaS) service delivery model allows a customer to rent virtualized servers and associated services used to run existing applications or to design, develop, test, deploy, and host applications. PaaS is delivered as an integrated computing platform, which may be used to develop software applications. Software development firms utilize PaaS providers to provide a number of application services such as source code control, software versioning tools, and application development process management tools. PaaS is used to build, test, and run applications on a cloud service provider's equipment rather than locally on user-owned servers.
The Infrastructure as a Service (IaaS) service delivery model allows a customer to rent hardware, storage devices, servers, network components, and data center space on a pay-as-you-go basis. The cloud service provider maintains the facilities, infrastructure updates and maintenance, and network security controls. Although IaaS is the primary cloud service model, two types of specialized infrastructure service models exist:
Cloud management security encompasses the policies, standards, and procedures involved with the transfer, storage, and retrieval of information to and from a cloud environment. As with any network, there are four primary activities:
Specific threats and vulnerabilities may be identified and addressed with the proper use of security controls.
Executing cloud-based applications involves user interaction with an application through an application programming interface (API) or virtual desktop environment. The cloud-based application is stored in cloud storage and executes on remote cloud service provider equipment. Cloud-based application execution provides several concerns for the security professional:
Data processing in the cloud is referred to as the compute function. Transactional data processing represents computations performed on a continuous stream of data sourced from such devices as cash registers and point-of-sale devices and input from business operations.
Executing a compute function in the cloud as part of a cloud-based application is identical to executing the exact same data processing function using a local-based application in an organization-owned data center. A cloud-based application vulnerability is a flaw or weakness in an application. Application design vulnerabilities are primary weaknesses that can be experienced regardless of location. There are a wide variety of hacker tools and penetration techniques used for exploiting application vulnerabilities. Here are a handful that are much more common than others:
Security professionals should consider all aspects of application vulnerability mitigation when executing applications on cloud-based servers. A cloud-based application strategy should include the following:
Clear data security strategies such as performing risk analysis that includes the Three Ps of data processing within an application:
Businesses have ported data from one location to another for a long time. Bulk data transfer is as old as early mainframe computers. The foundation of data transfer has been the AIC security triad of availability, integrity, and confidentiality. Of interest to organizations that move large amounts of data are nonrepudiation, end-to-end security, and auditability, as well as effective continuous monitoring that ensures accurate performance metrics.
Managed file transfer (MFT) is the transfer of data to, from, and between clouds securely and reliably regardless of data file size. It has recently found favor among cloud service providers as well as third-party data transmission providers. Managed file transfer is a technique of transferring data between sites or within an organization's networking environment utilizing best practices.
There are three specific areas of MFT:
Data stored in the cloud is stored on the cloud service provider's equipment. Data might be stored in large logical pools where the actual physical storage may span devices in several data centers, which are often geographically separated. Cloud storage devices range from traditional hard disk drives (HDDs) to solid-state drives (SSDs). Cloud providers often mix different storage technologies, arranging them in tiers from slow to fast, an approach referred to as tiered storage. For instance, performance may be balanced by placing directories, cross-reference tables, and other data that must be accessed extremely fast on solid-state drives; placing the remaining data on high-capacity hard disk drives; and storing rarely used data on less expensive, slower hard disk drives.
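A tiering decision can be expressed as a simple placement policy driven by access frequency and object size. The thresholds and tier names below are purely illustrative assumptions, not any provider's actual policy:

```python
def choose_tier(accesses_per_day: float, size_gb: float) -> str:
    """Toy tiered-storage policy: hot, small objects (directories,
    indexes, cross-reference tables) land on SSD; regularly used
    data on fast HDDs; cold data on cheap, slow archive disks.
    All thresholds here are illustrative only."""
    if accesses_per_day >= 100 and size_gb <= 1:
        return "ssd"          # must be processed extremely fast
    if accesses_per_day >= 1:
        return "fast-hdd"     # routine business data
    return "archive-hdd"      # rarely used data
```

A real provider's placement engine would also weigh cost per gigabyte, replication requirements, and contractual service levels, and would migrate objects between tiers as their access patterns change.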
Data redundancy is often provided by the cloud provider and refers to storing multiple copies of data across numerous servers in geographically separated locations. The cloud service provider is responsible for maintenance of equipment, integrity of the data in storage, and protection of data when moved between service provider locations.
Data stored in a cloud environment sometimes requires specialized handling because it may be stored virtually in a number of different locations. A storage mechanism such as erasure coding may be employed to supply data redundancy in the event of error or loss, while cloud-based data encryption may be selected based upon the nature of the data or the encryption services offered by the cloud service provider.
In actual practice, erasure codes may be compared to RAID data reconstruction techniques because of their ability to reconstruct data using codes. Erasure codes can be more CPU intensive and require more processing overhead, but this type of failsafe coding can be used with very large data chunks and works well with data warehousing as well as big data applications.
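The simplest erasure code is single-parity XOR, the same principle RAID 5 uses: XOR all data chunks into one parity chunk, and any single lost chunk can be rebuilt from the survivors. This is only a sketch of the idea; production cloud storage typically uses Reed-Solomon codes that tolerate multiple simultaneous losses.

```python
from functools import reduce

def make_parity(chunks: list[bytes]) -> bytes:
    """XOR equal-length data chunks together into one parity chunk."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                        chunks))

def reconstruct_lost_chunk(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild a single missing chunk: XOR of the parity with every
    surviving chunk cancels them out, leaving the lost chunk."""
    return make_parity(surviving + [parity])
```

For example, if three chunks produce one parity chunk and the second chunk is later lost, XOR-ing the first chunk, the third chunk, and the parity recovers the lost data exactly.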
As the growth of the cloud continues, several major concepts are on a collision course. The cloud is a seeming panacea of everything good about computing. The capability for expansion, simplicity of configuration, ease of virtualization, and the total demand-as-you-need mentality are extremely attractive to individual users as well as major organizations.
Although some cloud services seem expensive today, as with any competitive utility model, the marketplace will bring prices down to acceptable levels. The advantages of the use case models as well as the economics of the pricing models will only serve to drive more and more users to the cloud.
With all of these conveniences comes the complexity of complying with legislation, regulations, and laws issued by countries worldwide, an ever-increasing challenge for cloud providers as well as cloud users. Achieving adherence, compliance, or conformity to international laws on a global basis will become extremely important as the use of the cloud continues to expand.
As with any regulatory and legal environment, legislation, regulations, and laws will continue to change as legislators and regulators take actions. As always, when dealing with legal, compliance, and regulatory issues, the best advice to security professionals is to consult with relevant and knowledgeable professionals who specialize in legal cloud issues.
Borderless computing is defined as a globalized service that is widely accessible with no preconceived borders. The main concepts of data ownership and personal privacy are paramount when considering the transborder data storage and processing capability the cloud offers. Cloud provider offerings and the relevant contractual relationships will often refer to availability zones, usually global in nature, which are broadly defined areas of operation, data storage, data processing, and other activities carried out by a cloud provider. These availability zones define geographic areas such as the North American and South American zones, which are sometimes combined and just referred to as "the Americas." Other zones include the Asia-Pacific zone and the EMEA zone, which refers to Europe, the Middle East, and Africa. Although these are somewhat standardized business operational territories, they become extremely large and unwieldy when you take into account transborder data flows, in-country data storage, and local data processing in relation to complying with numerous country laws and regulations. Many cloud providers segment the traditionally large business zones into smaller cloud operational zones based on geography or similarity of legislation, such as that of the European Union (EU).
Various international laws and regulations typically specify responsibility and accountability for the protection of information. Accountability for adherence to these laws and regulations remains with the owner of the information. Therefore, corporate entities and organizations that utilize cloud-based services must impose the same accountability requirements upon the cloud service provider through contractual clauses specifying compliance requirements.
Personal data is defined as any data relating to a natural human being referred to as a data subject. An identifiable human being is a person who can be identified, directly or indirectly, by one or more factors specific to their physical, physiological, mental, economic, cultural, or social identity.
Numerous legal issues and requirements become relevant when personally identifiable data is collected, processed, transmitted, and stored in global-based cloud environments.
A number of legal requirements and compliance issues are of concern to data owners as well as cloud service providers.
Numerous laws and regulations have been enacted through the years to address requirements of personal privacy protection and accountability of the holders and processors of data and to provide transparency to operations.
The original term in the prior regulation was the so-called right to be forgotten. Although a noble concept and idea, it was vaguely worded, and it offered no means or methods for enforcement. It has been replaced in the GDPR by a better-defined right to erasure, which provides the data subject with certain capabilities and means to request elimination of personal data based upon a number of explicit grounds. This allows the fundamental rights and freedoms of the data subject to prevail over the desires of the data controller.
Fundamental concepts such as privacy by design and data protection impact assessments (DPIAs) build in the requirement that privacy be the default operational scenario and that impact assessments be conducted to identify the specific risks that processing might pose to the rights and freedoms of data subjects.
A safe harbor provision is a provision of a statute or a regulation that specifies that certain conduct will be deemed not to violate a given rule. Applied to U.S. companies operating within the confines of the original EU Directive 95/46/EC, the Safe Harbor Privacy Principles allow U.S. companies to opt in to the EU privacy program and to register their certification if they meet the European Union requirements for the storage and transfer of data. The safe harbor program was developed by the U.S. Department of Commerce; it covers U.S. organizations that transfer personal data out of the geographic territory of the 28 European Union member states, referred to as the European Economic Area (EEA). Organizations utilizing safe harbor privacy principles may also incorporate information usage clauses within contractual agreements concerning the transfer of data.
ECPA describes the conditions under which the government is able to access data and under which Internet service providers may disclose private content stored by a customer.
Discovery is a legal technique used to acquire knowledge required in the prosecution or defense of lawsuits or used in any part of litigation. eDiscovery specifically describes electronic information that is stored in or transferred over network storage devices, cloud systems, and other information repositories. The acquisition of such data is based on local rules and agreed-upon processes usually sanctioned by courts with jurisdiction; attorneys for either side request access to various data and are allowed access to data based upon court-established criteria. Attorneys may place a legal hold on data that is to be collected and analyzed as potential evidence in litigation processes. The Federal Rules of Civil Procedure, made effective in 2006 and 2007, substantially enhanced the proper retention and management of electronically stored information and compelled civil litigants to comply with proper handling techniques or be exposed to substantial sanctions. In the event of mishandling of electronically stored information, a finding of spoliation of evidence may be handed down by the court.
The following types of data are defined as electronically stored data:
Cloud-based data storage presents challenges for the security professional as well as attorneys in identifying, locating, and obtaining information that may be subject to eDiscovery. Although the Federal Rules of Civil Procedure (FRCP) in the United States require parties to litigation to be able to produce requested electronically stored data that is in their custody, possession, and control, there is much debate concerning whether custody, possession, and control reside with the cloud provider or with the information owner.
Location is also an extraordinary problem in the production of requested electronically stored information in the cloud. Cloud information can be stored in a number of country jurisdictions and, due to failover or load-balancing activities, may be transferred between countries and jurisdictions frequently and at will. Obtaining specifically requested information from the cloud provider may be difficult under current laws, regulations, and contractual responsibilities.
Contracts with cloud service providers should also include that the service provider is required to inform the information owner in the event of any court-ordered legal action with regards to their data stored by the provider.
Virtualization is the essential technology underlying cloud-based implementation of services. Through virtualization, virtual machines are separated from the underlying physical machines. In a virtual environment, the virtual machine runs under a hypervisor and utilizes the underlying physical hardware. The virtual machine that is created is referred to as a guest machine, while the physical server is referred to as the host machine. A hypervisor controls all of the interactions between the virtual machine and the host machine's physical assets such as RAM, CPU, and secondary storage.
Virtual technology may be attacked just as any other technology may. A benefit of a virtual machine is that it can be taken down and immediately re-created in the event of a penetration or attack. The virtual environment separates the attacker from the underlying hardware, although it is possible for a dedicated, experienced hacker to successfully attain root access and attack the hypervisor. Once in the hypervisor, the attacker has access to all of the virtual machines controlled by that hypervisor.
Security, as applied to virtualization, includes a collection of controls, procedures, and processes that ensure the availability, integrity, and confidentiality of the virtualization environment. Security controls can be implemented at various levels within the virtual environment. Controls can be implemented directly on a virtual machine or address vulnerabilities in the underlying physical device.
Big data refers to data sets so large and complex that predictive analytics or other advanced methods are required to extract value from them. These methods uncover new correlations that are used to spot business trends, prevent diseases, combat crime, drive complex physics simulations, and support biological and environmental research. Data sets grow in size due to the ever-growing availability of cheap and numerous information-sensing and gathering technology.
It is predicted that, as of 2014, in excess of 667 exabytes of data pass through the Internet annually. The retail, financial, government, and manufacturing sectors all maintain data repositories that are continuously analyzed to spot trends for the exploitation of marketing and other opportunities. For instance, it is reported that Walmart records in excess of 1 million customer transactions every hour, which are forwarded to databases estimated to contain more than 3.1 petabytes of data. This data is continuously analyzed to spot trends, outages, buying preferences, and other pertinent information on a worldwide basis. Big data is also used for climate change simulations, predicting financial markets as well as for military and science applications.
The data sets are so large that it is not possible to analyze the data on standard PCs or even mainframe computers. Standard relational database management systems, as well as conventional predictive analytics and data visualization applications, are not robust enough to handle the analysis. The analysis instead requires hundreds or even thousands of servers running in parallel to perform the required processing. Big data is growing so fast that what was defined as big data only a few years ago is commonplace today, and it is growing so large that analytic applications are having difficulty keeping up.
Big data has been described as having three vectors, often called the three Vs: the volume of data, the velocity of data in and out of the database, and the variety of data types and sources. In 2012, Gartner, Inc., updated its definition of big data as follows: “Big data is high volume, high velocity, and/or high variety information assets that require new forms of processing to enable enhanced decision making, insight discovery and process optimization.” The data analysis requires new forms of integration to uncover large hidden values from large data sets that are diverse, complex, and of a massive scale.
Big data requires exceptional technologies and the use of a virtual architecture as an option for dealing with the immense amount of data gathered and processed within a system. Multiple processing units designed as distributed parallel architecture frameworks distribute data across thousands of processors to provide much faster throughput and increased processing speeds.
MapReduce is a parallel processing model and application used to process huge amounts of data; it distributes database queries across hundreds or thousands of machines, which then process the data in parallel. The results of these queries are analyzed, gathered, and delivered. The MapReduce framework distributes the workload of processing the data across a large number of virtual machines. The Hadoop framework includes a big data file system, the Hadoop Distributed File System (HDFS), which stores and maintains data across a very large distributed system. The MapReduce application uses the Hadoop Distributed File System.
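The map and reduce phases can be sketched in a few lines. This is an in-process simulation of the programming model only: in a real Hadoop deployment the map tasks run in parallel across the cluster and HDFS supplies the input splits, neither of which is modeled here.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document: str) -> list[tuple[str, int]]:
    """Map: emit a (word, 1) pair for every word in one input split."""
    return [(word.lower(), 1) for word in document.split()]

def reduce_phase(pairs) -> dict[str, int]:
    """Shuffle/reduce: group the emitted pairs by key and sum the counts."""
    counts: dict[str, int] = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

def word_count(documents: list[str]) -> dict[str, int]:
    # In Hadoop, each map task would run on a separate node against
    # its own HDFS block; here the documents are processed in turn.
    return reduce_phase(chain.from_iterable(map_phase(d) for d in documents))
```

The classic word-count example shows why the model scales: every map task works independently on its own split, so adding machines adds throughput, and only the comparatively small (word, count) pairs move across the network to the reducers.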
A large number of companies, including eBay, Amazon, and Facebook, make use of big data and data warehouses to analyze consumer purchasing and merchandising trends. It has been estimated that the volume of business data worldwide doubles in just over 14 months.
Big data makes use of a distributed processing architecture featuring tens of thousands of server clusters working in parallel. With the data volume, data velocity, and sheer magnitude of data storage and transport in a parallel processing environment that is web-based, securing the systems is much more difficult.
The immensity of the data and the requirement for parallel processing increase the magnitude of the encryption challenge as well.
Encrypted search and cluster formation in big data were demonstrated in March 2014. Encryption speed and system resilience are the primary focus of the data encryption goals. Researchers have proposed an approach for identifying encoding techniques that expedite searching without decoding and re-encoding the data during the search process, a step toward eventual enhancements in big data security.
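One simple way to search data without decrypting it is a keyword index built from keyed hashes: the client derives a deterministic token for each keyword with a secret key, and the server matches tokens without ever seeing plaintext. This sketch illustrates the general idea of searchable indexing only; it is not the scheme from the research described above, and a deterministic index like this leaks keyword-frequency patterns that real searchable-encryption schemes work to hide.

```python
import hashlib
import hmac

def keyword_token(key: bytes, word: str) -> str:
    """Deterministic, keyed token for a keyword; without the key,
    the token reveals nothing about the underlying word."""
    return hmac.new(key, word.lower().encode(), hashlib.sha256).hexdigest()

def build_index(key: bytes, docs: dict[str, str]) -> dict[str, set[str]]:
    """Client-side: map each keyword token to the IDs of documents
    containing it.  Only this token index is handed to the server."""
    index: dict[str, set[str]] = {}
    for doc_id, text in docs.items():
        for word in text.split():
            index.setdefault(keyword_token(key, word), set()).add(doc_id)
    return index

def search(index: dict[str, set[str]], key: bytes, word: str) -> set[str]:
    """The query is a single token; the server matches it against the
    index without decrypting or re-encrypting any document."""
    return index.get(keyword_token(key, word), set())
```

The documents themselves would be stored separately under conventional encryption; the index lets the server answer "which documents contain this word" while holding only opaque tokens.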
The foundations of security are defined as availability, integrity, and confidentiality, and each is pushed to an absolute limit by the concepts of big data. Access control requires that a data owner classify data according to some criteria and provide access control decisions. The data represented in these monumental structures is so large and comes from such a variety of locations and sources that it is very difficult to have one person, or even a team of people, assume the traditional ownership role of the data once it is in the matrix.
The handling of big data creates challenges for many organizations. They must first determine data ownership and then determine the attributes that should be assigned to the data. One approach is to assign data ownership to the data outputs of search and analysis operations, which may be much more manageable than attempting to classify the entire database.
New terminology may appear where the data owner controls the data on the input side and the information owner assumes control of the data on the data output side. Two types of data ownership are surfacing out of this concept. The data owner places the data into the database, classifies the data, and determines access criteria. The information owner, on the other hand, owns and classifies information after a resulting process such as a search, sort, or analysis procedure.
Virtual environments offer the capability of running numerous virtual machines on a single underlying server, thus taking advantage of economies of scale and enabling the most efficient use of the underlying hardware. Virtualization increases resource flexibility and utilization and reduces infrastructure costs and overhead.
As you saw in Chapter 8, to date, conventional network design has developed from hierarchical structures such as tree, ring, star, and other network topologies built upon physical network connection devices such as servers, routers, and switches. In the world of client/server computing, the static design was suitable because once the client was connected to a server, the hardware routing did not change. With the dynamic computing requirements and increasing storage needs of today's enterprise data centers, designers have sought a much more flexible, dynamic means of interconnecting devices. These revolutionary design changes have been prompted by several environmental changes within the IT industry as well as by business demand. Some of these changes are listed here:
North/South is a classic data path in most enterprise networking environments. It refers to the standard data channel between a lower-level client computer and a higher-level server computer. As data centers have grown and the demand for higher-level applications that access multiple databases on many different servers has increased, a single request from a client computer may trigger a large amount of communications horizontally between a number of communicating servers containing databases and other components required for the communication. This data path is referred to as East/West, which describes a horizontal or machine-to-machine data path. Today's data processing environment requires a dynamic capability to set up data paths as required for the task at hand. Hardwired static data paths may no longer suffice.
Software-defined networking (SDN) is a virtualization methodology in which the actual data flow across the network is separated from the underlying hardware infrastructure. This allows networks to be defined almost instantaneously in response to consumer requirements. It is accomplished by consolidating common IT policies into a centralized platform that can control the automation of provisioning and configuration across the entire IT infrastructure. Networks can be reconfigured within minutes using virtualized technology rather than spending hours or days rewiring and reconfiguring hardware-based data centers.
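The separation of the control plane from the data plane can be sketched as a central controller programming the flow tables of software switches. The classes and method names below are a hypothetical minimal model, not the API of any real SDN controller such as OpenDaylight or ONOS:

```python
class Switch:
    """Data plane: forwards packets using only its installed flow table."""
    def __init__(self, name: str):
        self.name = name
        self.flow_table: dict[str, str] = {}   # destination -> output port

    def forward(self, destination: str) -> str:
        # A switch with no matching flow entry simply drops the packet
        # (or, in a real network, asks the controller what to do).
        return self.flow_table.get(destination, "drop")

class Controller:
    """Control plane: one central policy point that programs every switch."""
    def __init__(self):
        self.switches: list[Switch] = []

    def register(self, switch: Switch) -> None:
        self.switches.append(switch)

    def install_route(self, destination: str, out_port: str) -> None:
        # A single policy decision pushed to the whole fabric at once --
        # no per-device manual reconfiguration.
        for switch in self.switches:
            switch.flow_table[destination] = out_port
```

The point of the sketch is the division of labor: the switches hold no policy of their own, so redefining the network is a matter of the controller rewriting flow tables in software rather than anyone rewiring hardware.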
There are a number of software-defined network application providers offering a wide selection of products. To date, no single standardized application has surfaced as a clear leader. There are, however, several commonalities among all of the SDN models:
Network administrators make use of preconfigured virtual machine images sometimes referred to as snapshots that are ready to deploy on a hypervisor. A virtual appliance is a virtual machine image that is preconfigured and includes preloaded software and is ready to run in a hypervisor environment. The virtual appliance is intended to eliminate the installation, configuration, and maintenance costs associated with running complex virtual environments by preconfiguring ready-to-use virtual machine images.
There is a difference between virtual machines and virtual appliances. A virtual machine is a generic, stand-alone virtualized platform that consists of a CPU, primary storage (RAM), secondary storage (hard disk), and network connectivity. A virtual appliance is in fact a virtual machine by definition, but it also contains the operating system as well as software applications that are preconfigured and ready to run as soon as the virtual appliance image is in place. The major difference between a virtual machine and a virtual appliance is the configuration of the operating system. Most virtual machines use a standard preconfigured image of an operating system that includes all of the latest patches and upgrades, whether it be Windows, Linux, or another OS. In a virtual appliance, the operating system may be customized for the specific application and environment in which the appliance will be performing. This stripped-down or specifically configured operating system is referred to as just enough operating system (JeOS), which is pronounced “juice.”
A host machine provides the underlying hardware upon which virtual machines run. A single host may run a very large number of virtual machines, all sharing the same CPU, RAM, hard drive, and network communications capability. Host clustering is a method whereby a number of host machines are logically or physically connected so that all of their resources (such as CPU, RAM, hard drive, and network communications capability) can be shared among all of the hosted virtual machines. Host clustering provides the ability to draw on more resources as a workload increases as well as the safety of failover to another host in the event one host fails.
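The failover behavior can be sketched as a toy model: when a host fails, its virtual machines are restarted on surviving cluster members that still have spare capacity. The classes and the most-spare-capacity placement rule below are illustrative assumptions, not the algorithm of any particular clustering product:

```python
class Host:
    """A cluster member with a fixed VM capacity (toy model)."""
    def __init__(self, name: str, capacity: int):
        self.name = name
        self.capacity = capacity
        self.vms: list[str] = []

    def free_slots(self) -> int:
        return self.capacity - len(self.vms)

def fail_over(failed: Host, survivors: list[Host]) -> None:
    """Restart every VM from the failed host on the surviving host
    with the most spare capacity; raise if the cluster is full."""
    for vm in failed.vms:
        target = max(survivors, key=lambda h: h.free_slots())
        if target.free_slots() <= 0:
            raise RuntimeError("cluster has no spare capacity for " + vm)
        target.vms.append(vm)
    failed.vms = []
```

Real clusters size this headroom deliberately: an N-host cluster is typically run with at least one host's worth of capacity unused so that any single host failure can be absorbed.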
Within host clustering, all of the resources for all of the host computers are managed as if they were one large machine and are made available to the virtual machines that are members of the cluster. With a large number of cluster members (virtual machines) in contention for the limited resources of the host cluster, various resource sharing concepts are used to define the allocation of resources. The cluster administrator may define the requirements and allocations for each member of the cluster using the following techniques:
Storage clustering is the use of several storage servers managed and interconnected together to increase performance, capacity, or reliability. Storage clustering distributes workloads to each storage server and manages access to all files and data stores from any server regardless of the physical location of the files and data stores. Storage clustering should have the ability to meet the required service levels (specified in SLAs), keep all client data separate, and adequately safeguard and protect the stored data. The benefits of virtual storage clustering are as follows:
Two basic types of storage clustering architectures exist:
Attacks and countermeasures used in virtualized environments are very similar in nature to any attack and countermeasure utilized in a traditional hardwired network. Appropriate risk assessments should be made to identify probable threats and vulnerabilities within the environment. Organizations should conduct regular tests and assessments as well as provide for continuous monitoring and logging to identify the effectiveness of security controls and countermeasures.
The virtualization of various network components offers many advantages over using static network devices:
The benefits of server virtualization include better resource allocation as well as the ability to revert to an earlier snapshot or operating condition in the event of a problem or attack. Desktop virtualization, referred to as virtual desktop infrastructure (VDI), allows more flexibility to push out the correct image to a user device depending upon the required format of the device.
Substantial security challenges surface with the use of virtualization:
As a security practitioner, you will be closely involved with various aspects of incident handling involving the penetration of malicious code and malicious software. It is important to have an understanding of the different types of malicious code and the vectors by which they may arrive at the network or host location. Knowledge of the various countermeasures, including both software and hardware appliances, is required. Proper maintenance, patching, and upgrading are required for all software and hardware. It is the responsibility of the security professional to upgrade anti-malware software and devices to maintain an adequate operational base level. All patches and upgrades should be tested on offline systems that are similar to production-grade systems prior to being pushed out to production.
Endpoint devices include not only host computers but also portable devices and network nodes such as printers or scanners. Each device requires proper protection mechanisms and controls to ensure that it is adequately protected from attack.
The cloud is certainly a big buzzword in IT circles. But with the cloud comes extensive security requirements. Data must be secured in transit to and from the cloud servers and encrypted at rest in the cloud location. It is important to understand encryption mechanisms as well as storage and retrieval techniques used in cloud storage. Multi-tenancy refers to the practice of placing multiple tenants on cloud-based servers. This technique of dynamically sharing hardware can cause a number of security challenges.
Along with cloud comes the concept of big data. Big data and data warehousing are techniques of amassing huge amounts of data that may bridge across a number of storage locations. This amount of data is so large that specialized techniques of processing the data in parallel have been developed. Data ownership, classification, integrity, and confidentiality are all challenges facing the security practitioner.
The cloud is made up of virtual environments, which make the best use of hardware in an organization-owned data center. Virtualization creates virtual machines and virtual storage that run on top of physical machines, and the software-defined network does the same for network paths. These machines may be placed into operation and taken down very rapidly. Although a virtual server runs in a virtual environment, it is not without security concerns. Virtual machine escape is an attack whereby the attacker literally jumps out or "escapes" from the virtual machine and obtains control of the hypervisor or another virtual machine. With hypervisor privileges, the attacker has complete control of all of the other virtual machines as well as the underlying hardware infrastructure.
Of all of the ideas covered in this book, the concept of the cloud as well as virtual environments is among the most important for the SSCP to learn for the future. The cloud may definitely guide the future of information technology.
You can find the answers in Appendix A.
You can find the answers in Appendix B.
A. Hacker
B. Nation state
C. Cracker
D. Script kiddie
A. Cracker
B. Nation state
C. Anarchist
D. Hacktivist
A. It divides itself into many small pieces inside a PC.
B. It always attacks an email contacts list.
C. It requires an outside action in order to replicate.
D. It replicates without assistance.
A. A young unskilled hacker
B. A young inexperienced hacker
C. A hacker that uses scripts or tools to create an attack
D. A highly skilled attacker
A. A threat vector
B. A threat source location
C. The threat action effect
D. A threat vehicle
A. Malware that logs keystrokes
B. A member of a botnet
C. A tool used to achieve privilege escalation
D. A type of root kit
A. Advanced malware attack by a persistent hacker
B. A malware attack by a nation state
C. An advanced threat that continuously causes havoc
D. Malware that persistently moves from one place to another
A. A virus designed several years ago
B. A virus that attacks anti-malware software
C. A virus that uses tried-and-true older techniques to achieve a purpose
D. A mobile virus that attacks older phones
A. Phishing attack
B. Whaling attack
C. Watercooler attack
D. Golf course attack
A. Discovery attack
B. First use attack
C. Premier attack
D. Zero-day attack
A. HTML4 control
B. Visual Basic script
C. Java applet
D. Active X control
A. A module that verifies the authenticity of a guest host
B. The part of the operating system that must be invoked all the time and is referred to as a security kernel
C. A dedicated microprocessor that offloads cryptographic processing from the CPU while storing cryptographic keys
D. A computer facility with cryptographic processing power
A. To verify the author and integrity of downloadable code that is signed using a private key
B. To verify the author and integrity of downloadable code that is signed using a public key
C. To verify the author and integrity of downloadable code that is signed using a symmetric key
D. To verify the author and integrity of downloadable code that is signed using a master key
A. Broadly accessible by numerous networking platforms
B. Rapid elasticity
C. On-demand self-service
D. Inexpensive
A. Private cloud
B. Corporate cloud
C. Community cloud restoration
D. Public cloud
A. Help Desk as a Service
B. Software as a Service
C. Platform as a service
D. Infrastructure as a service
A. Any corporation that has done business in the European Union in excess of five years may apply for the Safe Harbor amendment.
B. Argentina and Brazil are members of the Asia-Pacific Privacy Pact.
C. The United States leads the world in privacy legislation.
D. The European Union's General Data Protection Regulation provides a single set of rules for all member states.
A. Any information put on legal hold
B. A legal tool used to request suspected evidentiary information that may be used in litigation
C. All information obtained through proper service of the search warrant
D. Any information owned by an organization with the exception of trade secrets.
A. Only hypervisors can be secured, not the underlying virtual machine.
B. Virtual machines are only secured by securing the underlying hardware infrastructure.
C. Virtual machines can be secured as well as the hypervisor and underlying hardware infrastructure.
D. Virtual machines by nature are always insecure.
A. Data warehouses are used for long-term storage of archive data.
B. Data is processed using auto-synthesis to enhance processing speed.
C. Big data is so large that standard relational database management systems do not work. Data must be processed by parallel processors.
D. Data can never be processed in real time.