Chapter 6

Understanding the Security Concerns Associated with Various Types of Vulnerabilities

This chapter covers the following topics related to Objective 1.6 (Explain the security concerns associated with various types of vulnerabilities) of the CompTIA Security+ SY0-601 certification exam:

  • Cloud-based vs. on-premises vulnerabilities

  • Zero-day

  • Weak configurations

    • Open permissions

    • Unsecure root accounts

    • Errors

    • Weak encryption

    • Unsecure protocols

    • Default settings

    • Open ports and services

  • Third-party risks

    • Vendor management

      • System integration

      • Lack of vendor support

    • Supply chain

    • Outsourced code development

    • Data storage

    • Improper or weak patch management

      • Firmware

      • Operating system (OS)

      • Applications

    • Legacy platforms

    • Impacts

      • Data loss

      • Data breaches

      • Data exfiltration

      • Identity theft

      • Financial

      • Reputation

      • Availability loss

This chapter starts with an overview of cloud-based versus on-premises vulnerabilities. You then learn about zero-day vulnerabilities and related attacks. Weak configurations (such as open permissions, unsecure root accounts, errors, weak encryption, unsecure protocols, default settings, and open ports and services) can allow attackers to breach a network and compromise your systems. This chapter covers the details of those common weaknesses. In this chapter, you also learn about third-party risks, the consequences of improper or weak patch management, and the risk of running legacy platforms that are often vulnerable to attacks. The chapter concludes with an overview of the different types of impact categories when a cybersecurity attack or breach occurs.

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Chapter Review Activities” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 6-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes and Review Questions.”

Table 6-1 “Do I Know This Already?” Section-to-Question Mapping

Foundation Topics Section | Questions
Cloud-based vs. On-premises Vulnerabilities | 1–2
Zero-day Vulnerabilities | 3
Weak Configurations | 4–5
Third-party Risks | 6
Improper or Weak Patch Management | 7
Legacy Platforms | 8
The Impact of Cybersecurity Attacks and Breaches | 9–10

Caution

The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which cloud service model offers computer networking, storage, load balancing, routing, and VM hosting?

  1. IaaS

  2. PaaS

  3. SaaS

  4. None of these answers are correct.

2. Which of the following is a security concern in cloud deployments?

  1. Encryption

  2. Authentication methods

  3. Identity management

  4. All of these answers are correct.

3. Which type of vulnerability is disclosed by an individual or exploited by an attacker before the creator of the software can create a patch to fix the underlying issue?

  1. Cross-site scripting

  2. Zero-day

  3. Information disclosure

  4. None of these answers are correct.

4. Which of the following can be considered a weak configuration that can allow attackers to perform an attack and compromise systems?

  1. Default settings and passwords

  2. Weak encryption

  3. Open permissions

  4. All of these answers are correct.

5. Which of the following is a weak protocol that should be avoided?

  1. HTTP without encryption

  2. Telnet

  3. FTP without encryption

  4. All of these answers are correct.

6. Which of the following should be considered when assessing third-party risks?

  1. Vendor management

  2. Supply chain

  3. Outsourced code development

  4. All of these answers are correct.

7. In Windows, what is a broadly released fix for a product-specific security-related vulnerability?

  1. Security update

  2. Service pack

  3. Threat model

  4. None of these answers are correct.

8. Which of the following is a disadvantage of running legacy platforms and products?

  1. They are often affected by security vulnerabilities.

  2. They do not have modern security features.

  3. When a device is past the last day of support, vendors will not investigate or patch security vulnerabilities in those devices.

  4. All of these answers are correct.

9. Which of the following could be categorized as different types of negative impact that a security breach could have in a corporation?

  1. Financial

  2. Reputation

  3. Availability loss

  4. All of these answers are correct.

10. Which of the following can be used by attackers to obfuscate their tactics when exfiltrating data from their target (victim)?

  1. Encryption

  2. Tunneling over a known protocol like DNS

  3. Encoding

  4. All of these answers are correct.

Foundation Topics

Cloud-based vs. On-premises Vulnerabilities

As time moves forward, more and more organizations are transferring some or all of their server and network resources to the cloud. Doing so creates many potential hazards and vulnerabilities that must be addressed by the security administrator and by the cloud provider. Top among these concerns are the servers, where all data is stored and accessed. Servers of all types should be hardened and protected from a variety of attacks in an effort to keep the integrity of data from being compromised. However, data must also be available. Consequently, you, as the security administrator, must strike a balance between security and availability. In this section, we discuss cloud-based threats as well as server vulnerabilities and how to combat them effectively.

Up until now we have focused on the individual computer system. Let’s expand our security perimeter to include networks. Network design is extremely important in a secure computing environment. The elements that you include in your design can help to defend against many different types of network attacks. Being able to identify these network threats is the next step in securing your network. If you apply the strategies and defense mechanisms included in this chapter, you should be able to stave off most network-based assaults. Maintaining the security of the servers and network infrastructure of an organization is your job as the security administrator, but with the inclusion of the cloud, the areas of responsibility might vary. These responsibilities depend on how much of the cloud is provided by a third party and how much of the cloud is held privately within the organization’s domain. Whether dealing with cloud providers, onsite cloud-based resources, locally owned servers and networks, or a mixture of all of them, you have a lot of duties and must understand not only security but how computer networking really functions. The CompTIA Security+ exam assumes that you have a working knowledge of networks and that you have the CompTIA Network+ certification or commensurate experience. Hereafter, this book will work within that mindset and will refer directly to the security side of things as it progresses. So, put on your networking hat and let’s begin with network design.

Historically, the term cloud was just a name for the Internet—anything beyond your network that you as a user couldn’t see. Technically speaking, the cloud was the area of the telephone company’s infrastructure; it was everything between one organization’s demarcation point and the demarcation point of another organization. It included central offices, switching offices, telephone poles, circuit-switching devices, packet assemblers/disassemblers (PADs), packet-switching exchanges (PSEs), and so on. In fact, all these things, and much more, are still part of the cloud, in the technical sense. Back in the day, this term was used only by telecommunications professionals and network engineers.

Today, the term cloud has a somewhat different meaning. Almost everyone has heard of it and probably used it to some extent. It is used heavily in marketing, and the meaning is less technical and more service-oriented than it used to be. It takes the place of most intranets and extranets that had existed for decades before its emergence.

In on-premises environments, physical servers and virtual machines control the sending and receiving of all kinds of data over the network, including websites, email and text messaging, and data stored as single files and in database format. A great many of these “servers” are now virtual in the cloud, with more moving there every day. And the cloud, however an organization connects to it, is all about networking. Therefore, the cloud, virtualization, and servers in general are all thoroughly intertwined.

Cloud computing can be defined as a way of offering on-demand services that extend the capabilities of a person’s computer or an organization’s network. These might be free services, such as personal browser-based email from various providers, or they could be offered on a pay-per-use basis, such as services that offer data access, data storage, infrastructure, and online gaming. A network connection of some sort is required to make the connection to the cloud and gain access to these services in real time.

Some of the benefits to an organization using cloud-based services include lowered cost, less administration and maintenance, more reliability, increased scalability, and possible increased performance. A basic example of a cloud-based service would be browser-based email. A small business with few employees definitely needs email, but it can’t afford the costs of an email server and perhaps does not want to have its own hosted domain and the costs and work that go along with that. By connecting to a free web browser–based service, a small business can obtain nearly unlimited email, contacts, and calendar solutions. But there is no administrative control, and there are some security concerns, which we discuss shortly.

Cloud computing services are generally broken down into several categories of services:

  • Software as a service (SaaS): This is the most commonly used and recognized of these categories, designed to provide a complete packaged solution. The software is rented out to the user. The service is usually provided through some type of front end or web portal. While the end user is free to use the service from anywhere, the company pays a per-use fee. Examples of SaaS offerings are Office 365, Gmail, Webex, Zoom, Dropbox, and Google Drive.

Note

Often compared to SaaS is the application service provider (ASP) model. SaaS typically offers a generalized service to many users. However, an ASP typically delivers a service (perhaps a single application) to a small number of users.

  • Infrastructure as a service (IaaS): This service offers computer networking, storage, load balancing, routing, and VM hosting. More and more organizations are seeing the benefits of offloading some of their networking infrastructure to the cloud.

  • Platform as a service (PaaS): This service provides various software solutions to organizations, especially the ability to develop applications in a virtual environment without the cost or administration of a physical platform. PaaS is used for easy-to-configure operating systems and on-demand computing. Often, this utilizes IaaS as well for an underlying infrastructure to the platform. Cloud-based virtual desktop environments (VDEs) and virtual desktop infrastructures (VDIs) are often considered to be part of this service but also can be part of IaaS.

  • Security as a service (SECaaS): In this service a large service provider integrates its security services into the company’s or customer’s existing infrastructure. The concept is that the service provider can provide the security more efficiently and more cost effectively than the company itself can, especially if the company has a limited IT staff or budget. The Cloud Security Alliance (CSA) defines various categories to help businesses implement and understand SECaaS, including encryption, data loss prevention (DLP), continuous monitoring, business continuity and disaster recovery (BCDR), vulnerability scanning, and much more.

Note

Periodically, new services will arrive, such as monitoring as a service (MaaS)—a framework that facilitates the deployment of monitoring within the cloud in a continuous fashion. There are many types of cloud-based services. If they don’t fall into the preceding list, then they often fall under the category “anything as a service” (XaaS).

A cloud service provider (CSP) might offer one or more of these services. The National Institute of Standards and Technology (NIST) provides a great resource explaining the different cloud service models in Special Publication (SP) 800-145 (The NIST Definition of Cloud Computing). The cloud deployment models defined there are summarized here. There are four types of clouds used by organizations: public, private, hybrid, and community.

  • Public cloud: In this type, a service provider offers applications and storage space to the general public over the Internet. Examples include free, web-based email services and pay-as-you-go business-class services. The main benefits of this type of cloud include low (or zero) cost and scalability. Providers of public cloud space include Google Cloud Platform (GCP), Microsoft Azure, and Amazon Web Services (AWS).

  • Private cloud: This type is designed with a particular organization in mind. As security administrator, you have more control over the data and infrastructure. A limited number of people have access to the cloud, and they are usually located behind a firewall of some sort in order to gain access to the private cloud. Resources might be provided by a third party or could come from the server room or data center.

  • Hybrid cloud: This is a mixture of public and private clouds. Dedicated servers located within the organization and cloud servers from a third party are used together to form the collective network. In these hybrid scenarios, confidential data is usually kept in-house.

  • Community cloud: This is another mix of public and private but one where multiple organizations can share the public portion. Community clouds appeal to organizations that usually have a common form of computing and storing of data.

The type of cloud an organization uses will be dictated by its budget, the level of security it requires, and the amount of manpower (or lack thereof) it has to administer its resources. While a private cloud can be very appealing, it is often beyond the ability of an organization, forcing that organization to seek the public or community-based cloud. However, it doesn’t matter what type of cloud is used. Resources still have to be secured by someone, and you’ll have a hand in that security one way or the other.

Cloud security hinges on the level of control you retain and the types of security controls you implement. When an organization makes a decision to use cloud computing, probably the most important security control concern to administrators is the loss of physical control of the organization’s data. A more in-depth list of cloud computing security concerns includes lack of privacy, lack of accountability, improper authentication, lack of administrative control, data sensitivity and integrity problems, data segregation issues, location of data and data recovery problems, malicious insider attacks, bug exploitation, lack of investigative support when there is a problem, and finally, questionable long-term viability. In general, everything that you worry about for your local network and computers! Keep in mind that cloud service providers can be abused as well: attackers often attempt to use providers’ infrastructure to launch powerful attacks.

Solutions to these security issues include the following:

  • Complex passwords: Strong passwords are beyond important; they are critical, as mentioned many times in this text. As of the writing of this book, accepted password schemes include the following:

    • For general security: A minimum of 10 characters, including at least one capital letter, one number, and one special character

    • For confidential data: A minimum of 15 characters, including a minimum of two each of capital letters, numbers, and special characters

When it comes to the cloud, you might just opt to use the second option for every type of cloud. The reasoning is that public clouds can be insecure (you just don’t know), and private clouds will most likely house the most confidential data. To enforce the type of password you want your users to choose, a strong server-based policy is recommended.
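
To make these two password schemes concrete, the following is a minimal Python sketch of a policy check that mirrors the thresholds above (10 characters with at least one capital, number, and special character for general use; 15 characters with at least two of each for confidential data). In production, such rules would be enforced by a server-based policy rather than an ad hoc script.

# Minimal sketch of the two password schemes described above.
import re

def meets_policy(password: str, confidential: bool = False) -> bool:
    min_len = 15 if confidential else 10
    min_each = 2 if confidential else 1
    if len(password) < min_len:
        return False
    capitals = len(re.findall(r"[A-Z]", password))
    numbers = len(re.findall(r"[0-9]", password))
    specials = len(re.findall(r"[^A-Za-z0-9]", password))
    return min(capitals, numbers, specials) >= min_each

print(meets_policy("Summer2025!"))        # True (meets the general scheme)
print(meets_policy("Summer2025!", True))  # False (too short for confidential data)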

However, passwords alone will not protect your data and systems. Passwords can be stolen and are often reused. This is why multifactor authentication is so important!

  • Powerful authentication methods: Multifactor authentication can offer a certain amount of defense in depth. In this scenario, if one form of authentication is compromised, the other works as a backup. For example, in addition to a password, a person might be asked for biometric confirmation such as a thumbprint or voice authorization, for an additional PIN, or to swipe a smart card. Multifactor authentication may or may not be physically possible, depending on the cloud environment being used, but if at all possible, it should be considered. An example of a multifactor authentication solution is Duo (https://duo.com). It provides a way to integrate multifactor authentication with many different types of deployments, and users can use a phone app to confirm their identity and authenticate to a system or application. (A small sketch of the one-time-password mechanism behind many authenticator apps follows this list.)

  • Strong cloud data access policies: We’re talking the who, what, and when. When it comes to public clouds especially, you should specifically define which users have access, exactly which resources they have access to, and when they are allowed to access those resources. Configure strong passwords and consider two-factor authentication. Configure policies from servers that govern the users; for example, use Group Policy objects on a Windows Server domain controller. Audit any and all connected devices and apps. Consider storing different types of data with different services. Some services do better with media files, for example. Remember that cloud storage is not backup. Approach the backing up of data as a separate procedure.

  • Encryption: Encryption of individual data files, whole disk encryption, digitally signed virtual machine files…the list goes on. Perhaps the most important is a robust public key infrastructure (PKI), which is discussed further in Chapter 25, “Implementing Public Key Infrastructure.” The reason is that many users will access data through a web browser.

  • Standardization of programming: The way applications are planned, designed, programmed, and run on the cloud should all be standardized from one platform to the next, and from one programmer to the next. Most important is standardized testing in the form of input validation, fuzzing, and known environment, unknown environment, or partially known environment testing.

  • Protection of all the data!: This includes storage-area networks (SANs), general cloud storage, and the handling of big data (for example, astronomical data). When data is stored in multiple locations, it is easy for some to slip through the cracks. Detailed documentation of what is stored where (and how it is secured) should be kept and updated periodically. As a top-notch security administrator, you don’t want your data to be tampered with. Therefore, implementing some cloud-based security controls can be very helpful. For example, consider the following: deterrent controls (discourage attackers from attempting to tamper with data), preventive controls (increase the security strength of a system that houses data), corrective controls (reduce the effects of data tampering that has occurred), and detective controls (detect attacks in real time, and have a defense plan that can be immediately carried out).
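
As a concrete illustration of the multifactor authentication bullet above, here is a minimal Python sketch of the time-based one-time password (TOTP, RFC 6238) mechanism used by many authenticator apps. It uses only the standard library; the base32 secret is illustrative, and real deployments provision a unique secret per user (for example, via a QR code scanned into the app).

# Minimal RFC 6238 TOTP sketch; the shared secret below is illustrative.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval            # current time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # prints the current 6-digit code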

What else are you, as administrator, trying to protect here? You’re concerned with protecting the identity and privacy of your users (especially executives because they are high-profile targets). You need to secure the privacy of credit card numbers and other super-confidential information. You want to secure physical servers that are part of the server room or data center because they might be part of your private cloud. You desire protection of your applications with testing and acceptance procedures. (Keep in mind that these things all need to be done within contractual obligations with any third-party cloud providers.) And finally, you’re interested in promoting the availability of your data. After all of your security controls and methods have been implemented, you might find that you have locked out more people than first intended. So, your design plan should contain details that will allow for available data, but in a secure manner.

Customers considering using cloud computing services should ask for transparency—or detailed information about the provider’s security. The provider must be in compliance with the organization’s security policies; otherwise, the data and software in the cloud become far less secure than the data and software within the customer’s own network. This concept, and most of the concepts in the first half of this chapter, should be considered when planning whether to have data, systems, and infrastructure contained on-premises, in a hosted environment, on the cloud, or in a mix of those. If there is a mix of on-premises infrastructure and cloud-provider infrastructure, a company might consider a cloud access security broker (CASB)—a software tool or service that acts as the gatekeeper between the two, allowing the company to extend the reach of its security policies beyond its internal infrastructure.

Other “Cloud”-based Concerns

Other technologies to watch out for are loosely connected with what can be called cloud technologies. One example is social media. Social media environments can include websites as well as special applications that are loaded directly on to the computer (mobile or desktop), among other ways to connect, both legitimate and illegitimate. People share the darndest things on social media websites, which can easily compromise the security of employees and data. The point? There are several ways to access social media platforms, and it can be difficult for a security administrator to find every website, application, service, and port that is used by social media. In cases such as these, you might consider implementing more allow lists and block/deny lists to safeguard applications so that users are better locked down.

Another thing to watch for is P2P networks. File sharing, gaming, media streaming, and all the world is apparently available to a user—if the user knows where to look. However, P2P often comes with a price: malware and potential system infiltration. By the latter, I mean that computers can become unwilling participants in the sharing of data on a P2P network. This is one example in which the cloud invades client computers, often without the user’s consent. Access to file sharing, P2P, and torrents also needs a permanent “padlock.”

Then there’s the dark web. Where Batman stores his data… No, the dark web is another type of P2P network (often referred to as F2F, meaning friend-to-friend) that creates connections between trusted peers (unlike most other P2P networks) but uses nonstandard ports and protocols. This makes it a bit more difficult to detect. The dark web is certainly a safe haven for illegal activities because its networks are designed specifically to resist surveillance. Computers that are part of an administrator’s network, and more often, virtual machines in the admin’s cloud, can be part of the dark web and can easily go undetected by the admin. In some cases, an employee of the organization (or an employee of the cloud provider) might have configured some cloud-based resources to join the dark web. This can have devastating legal consequences if illegal activities are traced to your organization. Thorough checks of cloud-based resources can help to prevent this situation. Also, screening of employees, careful inspection of service-level agreements with cloud providers, and the use of third-party IT auditors can help prevent dark web connectivity, P2P links, and improper use of social media.

Server Defense

Now we come down to it. Servers are the cornerstone of data. They store it, transfer it, archive it, and allow or disallow access to it. They need super-fast network connections that are monitored and baselined regularly. They require you to configure policies, check logs, and perform audits frequently. They exist in networks both large and small, within public and private clouds, and are often present in virtual fashion. What it all comes down to is that servers contain the data and the services that everyone relies on. So, they are effectively the most important things to secure on your network.

Let’s break down five types of servers that are of great importance (in no particular order), and look at some of the threats and vulnerabilities to those servers, and ways to protect them.

File Servers

File server computers store, transfer, migrate, synchronize, and archive files. Really any computer can act as a file server of sorts, but examples of actual server software include Microsoft Windows Server and the various types of Linux server versions (for example, Ubuntu Server or Red Hat Server), not to mention UNIX. File servers are vulnerable to the same types of attacks and malware that typical desktop computers are. To secure file servers (and the rest of the servers on this list), you should employ hardening, updating, antimalware applications, software-based firewalls, host-based intrusion detection systems (HIDSs), and encryption, and be sure to monitor the server regularly.

Network Controllers

A network controller is a server that acts as a central repository of user accounts and computer accounts on the network. All users log in to this server. An example would be a Windows Server system that has been promoted to a domain controller (running Active Directory). In addition to the attacks mentioned for file servers, a domain controller can be the victim of LDAP injection. It also has Kerberos vulnerabilities, which can ultimately result in privilege escalation or spoofing. LDAP injection can be prevented with proper input validation. But in the specific case of a Windows domain controller, really the only way to keep it protected (aside from the preventive measures mentioned for file servers) is to install specific security software updates for the OS.
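
As a small illustration of the input validation that blunts LDAP injection, the following Python sketch escapes the special characters defined in RFC 4515 before user input is placed into an LDAP search filter. The sample filter and input are illustrative.

# Minimal sketch of LDAP search-filter escaping per RFC 4515.
def escape_ldap_filter(value: str) -> str:
    replacements = {
        "\\": r"\5c",   # escape the backslash first
        "*": r"\2a",
        "(": r"\28",
        ")": r"\29",
        "\x00": r"\00",
    }
    for char, escaped in replacements.items():
        value = value.replace(char, escaped)
    return value

user_input = "admin)(uid=*"                       # a classic injection attempt
print(f"(uid={escape_ldap_filter(user_input)})")  # (uid=admin\29\28uid=\2a)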

Email Servers

Email servers are part of the message server family. Here, a message server is any server that deals with email, faxing, texting, chatting, and so on. For this section let’s concentrate strictly on the email server. The most common of these is Microsoft Exchange. An Exchange Server might run POP3, SMTP, and IMAP, and allow for Outlook web-based connections. That’s a lot of protocols and ports running. So, it’s not surprising to hear some Exchange admins confess that running an email server can be difficult at times, particularly because it is vulnerable to XSS attacks, overflows, DoS/DDoS attacks, SMTP memory exploits, directory traversal attacks, and of course, spam. Bottom line: it has to be patched…a lot.

As an administrator, you need to keep on top of the latest vulnerabilities and attacks and possibly be prepared to shut down or quarantine an email server at a moment’s notice. New attacks and exploits are constantly surfacing because email servers are a common and big target with a large attack surface. For spam, a hardware-based spam filter is most effective (such as one from Barracuda), but software-based filters can also help. To protect the integrity and confidentiality of email-based data, you should consider DLP and encryption. Security could also come in the form of secure POP3 and secure SMTP; for more about the specific secure email protocols, see Chapter 7, “Summarizing the Techniques Used in Security Assessments.” Also, security can come as encrypted SSL/TLS connections, Secure Password Authentication (SPA), secure VPNs, and other encryption types, especially for web-based connections. Web-based connections can be particularly insecure; great care must be taken to secure these connections. For example, push notification services for mobile devices are quite common. While TLS is normally used as a secure channel for the email connection, text and metadata can at times be sent as clear text. A solution to this is for the operating system to use a symmetric key to encrypt the data payload. Vendors may or may not do this, so it is up to the email administrator to incorporate this added layer of security, or at least verify that push email providers are implementing it.
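
One quick way to verify part of this posture is to check whether a mail server actually advertises STARTTLS. The following Python sketch uses the standard library’s smtplib; the hostname is a placeholder, and port 587 is the standard mail submission port.

# Minimal sketch: does this SMTP server offer STARTTLS?
import smtplib

def supports_starttls(host: str, port: int = 587) -> bool:
    with smtplib.SMTP(host, port, timeout=10) as server:
        server.ehlo()                         # EHLO populates the extension list
        return server.has_extn("starttls")

print(supports_starttls("mail.example.com"))  # placeholder hostname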

Thinking a little outside of the box, you could consider moving away from Microsoft (which is the victim of the most attacks) and toward a Linux solution such as Apache James (a Java-based mail server from the Apache Software Foundation) or a third-party tool such as Zimbra (or one of many others). These solutions are not foolproof and still need to be updated, but it is a well-known fact that historically Linux has not been attacked as often as Microsoft (in general), though the difference between the two in the number of attacks experienced has shrunk considerably since the turn of the millennium.

Web Servers

The web server could be the most commonly attacked server of them all. Examples of web servers include Microsoft’s Internet Information Services (IIS), Apache HTTP Server (Linux), lighttpd (FreeBSD), Oracle iPlanet Web Server (Oracle), and iPlanet’s predecessor Sun Java System Web Server (Sun Microsystems). Web servers in general can be the victim of DoS attacks, overflows, XSS and XSRF, remote code execution (RCE), and various attacks that make use of backdoors. For example, in IIS, if basic authentication is enabled, a backdoor could be created, and attackers could ultimately bypass access restrictions. An IIS administrator must keep up to date with the latest vulnerabilities by reading Microsoft Security Bulletins, such as this one that addresses possible information disclosure: https://technet.microsoft.com/library/security/ms12-073.

In general, as security administrator, you should keep up to date with Common Vulnerabilities and Exposures (CVE) as maintained by MITRE (https://cve.mitre.org/). The latest CVE listings for applications and operating systems can be found there and at several other websites.
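
Checking CVEs can also be scripted. The following Python sketch queries NIST’s public NVD CVE API for a product keyword; it assumes the third-party requests package, and the endpoint and JSON field names are current as of this writing and may change.

# Minimal sketch: pull recent CVE entries for a keyword from the NVD API.
import requests

def search_cves(keyword: str, limit: int = 5):
    url = "https://services.nvd.nist.gov/rest/json/cves/2.0"
    resp = requests.get(url, params={"keywordSearch": keyword,
                                     "resultsPerPage": limit}, timeout=30)
    resp.raise_for_status()
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        print(cve["id"], "-", cve["descriptions"][0]["value"][:80])

search_cves("Apache HTTP Server")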

Aside from the usual programmatic solutions to vulnerabilities such as XSS and standard updating and hot patching, you might consider adding and configuring a hardware-based firewall from Cisco, Fortinet, Palo Alto, or other similar company. Of course, HTTPS (be it SSL or, better yet, TLS) can be beneficial if the scenario calls for it. Once a server is secured, you can prove the relative security of the system to users by using an automated vulnerability scanning program (such as Netcraft) that leaves a little image on the web pages stating whether or not the site is secure and when it was scanned or audited.

Apache can be the casualty of many attacks as well, including privilege escalation, code injection, and exploits to the proxy portion of the software. PHP forms and the PHP engine could act as gateways to the Apache web server. Patches to known CVEs should be applied as soon as possible.

Note

You can find a list of CVEs to Apache HTTP Server (and the corresponding updates) at https://httpd.apache.org/security.

When it comes to Apache web servers, you have to watch out for the web server attack called Darkleech. It takes the form of a malicious Apache module (specifically an injected HTML iframe tag within a PHP file). If loaded on a compromised Apache web server, it can initiate all kinds of attacks and deliver various payloads of malware and ransomware. Though Darkleech is not limited to Apache, the bulk of Darkleech-infected sites have been Apache-based.

Note

So much for Microsoft being less targeted than Linux. As time moves forward, it seems that no platform is safe. A word to the wise: don’t rely on any particular technology because of a reputation, and be sure to update and patch every technology you use.

As far as combating Darkleech, a webmaster can attempt to query the system for PHP files stored in folders with suspiciously long hexadecimal names. If convenient for the organization, all iframes can be filtered out. Of course, the Apache server should be updated as soon as possible and, if necessary, taken offline while it is repaired. In many cases, this type of web server attack is very hard to detect, and sometimes the only recourse is to rebuild the server (or virtual server image) that hosts the Apache server.
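
The “suspiciously long hexadecimal names” check lends itself to a simple script. The following Python sketch walks a web root (the path and length threshold are illustrative) and flags PHP files in such directories.

# Minimal sketch of the detection approach described above.
import os, re

HEX_DIR = re.compile(r"^[0-9a-f]{16,}$")  # "suspiciously long" hex names

def find_suspect_php(web_root: str = "/var/www/html"):
    for dirpath, dirnames, filenames in os.walk(web_root):
        if HEX_DIR.match(os.path.basename(dirpath)):
            for name in filenames:
                if name.endswith(".php"):
                    print("Suspect:", os.path.join(dirpath, name))

find_suspect_php()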

Note

Another tool that some attackers use is archive.org. This website takes snapshots of many websites over time and stores them. They are accessible to anyone and can give attackers an idea of older (and possibly less secure) pages and scripts that used to run on a web server. It could be that these files and scripts are still located on the web server even though they are no longer used. This is a vulnerability that you should be aware of. Strongly consider removing older unused files and scripts from web servers.

FTP Server

An FTP server can be used to provide basic file access publicly or privately. Examples of FTP servers include the FTP server built into IIS, the Apache FtpServer, and other third-party offerings such as FileZilla Server and Pure-FTPd.

The standard, default FTP server is pretty insecure. It uses well-known ports (20 and 21), doesn’t use encryption by default, and has basic username/password authentication. As a result, FTP servers are often the victims of many types of attacks. Examples include bounce attacks (when a person attempts to hijack the FTP service to scan other computers), buffer overflow attempts (when an attacker tries to send an extremely long username or password or filename to trigger the overflow), and attacks on the anonymous account (if utilized).

If the files to be stored on the FTP server are at all confidential, you should consider additional security. You can do so by incorporating FTP software that utilizes secure file transfer protocols such as FTPS or SFTP. Additional security can be provided by using FTP software that uses dynamic assignment of data ports, instead of using port 20 every time. Encryption can prevent most attackers from reading the data files, even if they are able to get access to them. Of course, if not a public FTP, the anonymous account should be disabled.
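
A quick scripted check can confirm that the anonymous account really is disabled. The following Python sketch uses the standard library’s ftplib, whose login() method attempts an anonymous login by default; the hostname is a placeholder.

# Minimal sketch: does this FTP server still accept anonymous logins?
from ftplib import FTP, error_perm

def anonymous_allowed(host: str) -> bool:
    try:
        with FTP(host, timeout=10) as ftp:
            ftp.login()            # defaults to the anonymous account
            return True
    except error_perm:
        return False               # server rejected the credentials

print(anonymous_allowed("ftp.example.com"))  # placeholder hostname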

There also are other, more sinister attacks lurking, ones that work in conjunction with the web server, which is often on the same computer, or part of the same software suite—for instance, the web shell. There are plenty of variants of the web shell, but here we describe its basic function. The web shell is a program that is installed on a web server by an attacker and is used to remotely access and reconfigure the server without the owner’s consent.

Web shells are remote access Trojans (RATs) but are also referred to as backdoors because they offer an alternative way of accessing the website for the attacker. The reason the web shell attack is described here in the FTP section is that it is usually the FTP server that contains the real vulnerability—weak passwords. Once an attacker figures out an administrator password of the FTP server (often through brute-force attempts), the attacker can easily install the web shell and effectively do anything desired to that web server (and/or the FTP server). It seems like a house of cards, and in a way, it is.

How can you prevent this from happening? First, you can increase the password security and change the passwords of all administrator accounts. Second, you can eliminate any unnecessary accounts, especially any superfluous admin accounts and the dreaded anonymous account. Next, you should strongly consider separating the FTP server and web server to two different computers or virtual machines. Finally, you should set up automated scans for web shell scripts (usually PHP files) or have the web server provider do so. If the provider doesn’t offer that kind of scanning, you can use a different provider. If a web shell attack is accomplished successfully on a server, you must at the very least search for and delete the original RAT files and at worst reimage the system and restore from backup. This latter option is often necessary if the attacker has had some time to compromise the server. Some organizations have policies that state servers must be reimaged if they are compromised in any way, shape, or form. This solution is a way of starting anew with a clean slate, but it means a lot of configuring. But again, the overall concern here is the complexity of, and the frequency of changing, the password.
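
The automated scan mentioned above can be as simple as flagging PHP files that call functions commonly abused by web shells. The following Python sketch does exactly that; real scanners use far richer signatures, and the patterns and path here are illustrative.

# Minimal sketch: flag PHP files containing commonly abused function calls.
import os, re

SUSPICIOUS = re.compile(rb"(eval\s*\(|base64_decode\s*\(|shell_exec\s*\()")

def scan_for_web_shells(web_root: str = "/var/www/html"):
    for dirpath, _, filenames in os.walk(web_root):
        for name in filenames:
            if name.endswith(".php"):
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    if SUSPICIOUS.search(f.read()):
                        print("Review:", path)

scan_for_web_shells()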

That’s the short list of servers. There are plenty of others you need to be cognizant of, including DNS servers, application servers, virtualization servers, firewall/proxy servers, database servers, print servers, remote connectivity servers such as RRAS and VPN, and computer telephony integration (CTI) servers. If you are in charge of securing a server, be sure to examine the CVEs and bulletins for that software and be ready to hot-patch the system at a moment’s notice. This means having an RDP, VNC, or other remote connection to that specific server ready to go on your desktop so that you can access it quickly.

Zero-day Vulnerabilities

A zero-day vulnerability is a type of vulnerability that is disclosed by an individual or exploited by an attacker before the creator of the software can create a patch to fix the underlying issue. Attacks leveraging zero-day vulnerabilities can cause damage even after the creator knows of the vulnerability because it may take time to release a patch to prevent the attacks and fix damage caused by them.

The risk of zero-day attacks can be reduced by using newer operating systems that have protection mechanisms and by keeping those operating systems updated. It can also be lowered by using multiple layers of firewalls and by using application allow lists (approved lists), which permit only known good applications to run. Collectively, these preventive methods are referred to as zero-day protection.
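
To illustrate the allow list approach, here is a minimal Python sketch that permits an executable only if its SHA-256 hash appears in a known-good set. The hash shown (the digest of an empty file) and the path are placeholders; real solutions enforce this at the operating system level rather than in a script.

# Minimal sketch of an application allow list based on SHA-256 hashes.
import hashlib

ALLOWED_SHA256 = {
    # placeholder: SHA-256 of an empty file
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_allowed(path: str) -> bool:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() in ALLOWED_SHA256

print(is_allowed("/usr/bin/some_tool"))  # placeholder path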

Table 6-2 summarizes programming vulnerabilities/attacks covered so far.

Table 6-2 Programming Vulnerabilities and Attacks

Vulnerability | Description
Backdoor | Code placed by programmers, knowingly or inadvertently, to bypass normal authentication and other security mechanisms.
Buffer overflow | A vulnerability that occurs when a process stores data outside the memory that the developer intended.
Remote code execution (RCE) | A situation that occurs when an attacker obtains control of a target computer through some sort of vulnerability, gaining the power to execute commands on that remote computer.
Cross-site scripting (XSS) | A vulnerability that exploits the trust a user’s browser has in a website through code injection, often in web forms.
Cross-site request forgery (XSRF) | A vulnerability that exploits the trust that a website has in a user’s browser, which becomes compromised and transmits unauthorized commands to the website.
Code injection | An attack that occurs when user input in database web forms is not filtered correctly and is executed improperly. SQL injection is a very common example.
Directory traversal | A method of accessing unauthorized parent (or worse, root) directories.
Zero-day | A group of attacks executed on vulnerabilities in software before those vulnerabilities are known to the creator.

The CompTIA Security+ exam objectives don’t expect you to be a programmer, but they do expect you to have a basic knowledge of programming languages and methodologies so that you can help to secure applications effectively. I recommend a basic knowledge of programming languages used to build applications, such as Visual Basic, C++, C#, PowerShell, Java, and Python, as well as web-based programming languages such as HTML, JavaScript, and PHP, plus knowledge of database programming languages such as SQL. This foundation knowledge will help you not only on the exam but also when you, as security administrator, have to act as a liaison to the programming team or if you are actually involved in testing an application.

Weak Configurations

Weak configurations can be leveraged by attackers to compromise systems and networks. Often, on-premises and cloud-based systems and applications are deployed without security in mind. Sometimes these weak configurations are due to a lack of security awareness and understanding of common threats.

The following are some of the most prevalent types of weak configurations:

  • Open permissions: User accounts can be added to individual computers or to networks. For example, a Windows client, Linux computer, or Mac can have multiple users. Larger networks that have a controlling server—for example, a Windows domain controller—enable user accounts that can access one or more computers on the domain. In some cases, users and/or applications are given access (permissions) to resources that they do not need to access (including files, folders, and systems). Applications should be coded and run in such a way as to maintain the principle of least privilege. Users should have access only to what they need. Processes should run with only the bare minimum access needed to complete their functions. However, this can be coupled with separation of privilege, where access to objects depends on more than one condition (for example, authentication plus an encrypted key).

  • Unsecure root accounts: Out-of-the-box offerings should be as secure as possible. If possible, user password complexity and password aging default policies should be configured by the programmer, not the user. Administrative accounts (root accounts in Linux or the Administrator account in Windows) should be protected at all times.

  • Errors: Sometimes weak configurations are due to human errors that could lead to security nightmares; for example, a mistyped firewall rule or a cloud storage bucket accidentally left open to the public can expose an entire dataset. The potential for human error always exists.

  • Weak encryption: Weak encryption or no encryption can be the worst thing that can happen to a wireless or wired network. This situation can occur for several reasons; for example, someone wants to connect an older device or a device that hasn’t been updated, and that device can run only a weaker, older type of encryption. Weak cryptographic implementations can lead to sensitive data being exposed to attackers and could also have a direct impact to privacy. You will learn the basics of cryptographic concepts and encryption in Chapter 16, “Summarizing the Basics of Cryptographic Concepts.”

  • Unsecure protocols: Unsecure protocols can also lead to the unauthorized exposure of sensitive data and can allow attackers to compromise systems and applications. Protocols such as Telnet, FTP (without encryption), Finger, SSL, old versions of SNMP, and others should be avoided at all times. Table 6-3 shows protocols and related ports, as well as the “more secure options” of some of those protocols.

  • Default settings: A common adage in the security industry is, “Why do you need hackers if you have default settings and passwords?” Many organizations and individuals leave infrastructure devices such as routers, switches, wireless access points, and even firewalls configured with default passwords. Attackers can easily identify and access systems that use shared default passwords. It is extremely important to always change default manufacturer passwords and restrict network access to critical systems. A lot of manufacturers now require users to change the default passwords during initial setup, but some don’t. Attackers can easily obtain default passwords and identify Internet-connected target systems. Passwords can be found in product documentation and compiled lists available on the Internet. Different examples are available in the GitHub repository at https://github.com/The-Art-of-Hacking/h4cker, but dozens of other sites contain default passwords and configurations on the Internet. It is easy to identify devices that have default passwords and that are exposed to the Internet by using search engines such as Shodan (https://www.shodan.io).

  • Open ports and services: Any unnecessary ports should be closed, and any open ports should be protected and monitored carefully. Although there are 1,024 well-known ports, for the exam you need to know only a handful of them, plus some that are beyond 1,024, as shown in Table 6-3. Remember that these inbound port numbers relate to the applications, services, and protocols that run on a computer, often a server. When it comes to the OSI reference model, the bulk of these protocols are application layer protocols. Examples of these protocols include HTTP, FTP, SMTP, SSH, DHCP, and POP3, and there are many more. Because these are known as application layer protocols, their associated ports are known as application service ports. The bulk of Table 6-3 is composed of application service ports. Some of the protocols listed make use of TCP transport layer connections only (for example, HTTP, port 80; and HTTPS, port 443). Some make use of UDP only (for example, SNMP, port 161). Many can use TCP or UDP transport mechanisms. Study Table 6-3 carefully now, bookmark it, and refer to it often.

Table 6-3 Ports and Their Associated Protocols

Port Number | Associated Protocol (or Keyword) | TCP/UDP Usage | Secure Version and Port | Usage
21 | FTP | TCP | FTPS, port 989/990 | Transfers files from host to host.
22 | SSH | TCP or UDP | - | Secure Shell: Remotely administers network devices and systems. Also is used by Secure Copy (SCP) and Secure FTP (SFTP).
23 | Telnet | TCP or UDP | - | Remotely administers network devices (deprecated).
25 | SMTP | TCP | SMTP with SSL/TLS, port 465 or 587 | Sends email.
49 | TACACS+ | TCP | - | Provides remote authentication. Can also use UDP, but TCP is the default. Compare with RADIUS.
53 | DNS | TCP or UDP | DNSSEC | Resolves hostnames to IP addresses and vice versa.
69 | TFTP | UDP | - | Is a basic version of FTP.
80 | HTTP | TCP | HTTPS (uses SSL/TLS), port 443 | Transmits web page data.
88 | Kerberos | TCP or UDP | - | Provides network authentication; uses tickets.
110 | POP3 | TCP | POP3 with SSL/TLS, port 995 | Receives email.
119 | NNTP | TCP | - | Transports Usenet articles.
135 | RPC/epmap/dcom-scm | TCP or UDP | - | Enables you to locate DCOM ports. Also known as RPC (Remote Procedure Call).
137–139 | NetBIOS | TCP or UDP | - | Enables name querying, data sending, and NetBIOS connections.
143 | IMAP | TCP | IMAP4 with SSL/TLS, port 993 | Retrieves email, with advantages over POP3.
161 | SNMP | UDP | - | Remotely monitors network devices.
162 | SNMPTRAP | TCP or UDP | - | Sends Traps and InformRequests to the SNMP Manager on this port.
389 | LDAP | TCP or UDP | LDAP over SSL/TLS, port 636 | Maintains directories of users and other objects.
445 | SMB | TCP | - | Provides shared access to files and other resources.
514 | Syslog | UDP | Syslog over TLS, port 6514 (TCP) | Is used for computer message logging, especially for router and firewall logs.
860 | iSCSI | TCP | - | Is an IP-based protocol used for linking data storage facilities. Also uses port 3260 for the iSCSI target.
1433 | Ms-sql-s | TCP | - | Opens queries to Microsoft SQL Server.
1701 | L2TP | UDP | - | Is a VPN protocol with no inherent security. Often used with IPsec.
1723 | PPTP | TCP or UDP | - | Is a VPN protocol with built-in security.
1812/1813 | RADIUS | UDP | - | Is an AAA protocol used for authentication (port 1812), authorization, and accounting (port 1813) of users. Also uses ports 1645 and 1646.
3225 | FCIP | TCP or UDP | - | Encapsulates Fibre Channel frames within TCP/IP packets. Contrast with Fibre Channel over Ethernet (FCoE), which relies on the data link layer and doesn’t rely on TCP/IP directly.
3389 | RDP | TCP or UDP | - | Remotely views and controls other Windows systems.
3868 | Diameter | TCP (or SCTP) | - | Is a protocol that can be used for authentication, authorization, and accounting (AAA). Diameter is an alternative to the RADIUS protocol.

Note

You can find a complete list of ports and their corresponding protocols at http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml. Not all protocols have set port numbers. For example, the Real-Time Transport Protocol (RTP) and Secure RTP (SRTP) use a pair of port numbers determined by the application that is streaming the audio and video information via RTP. They are selected from a broad range of ports (between 16384 and 32767).
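
As a practical companion to Table 6-3, the following Python sketch checks a handful of the listed TCP ports on a host using only the standard library. The target address is a placeholder from the documentation range; scan only systems you are authorized to test.

# Minimal sketch: check which well-known TCP ports respond on a host.
import socket

PORTS = {21: "FTP", 22: "SSH", 23: "Telnet", 25: "SMTP", 53: "DNS",
         80: "HTTP", 110: "POP3", 143: "IMAP", 443: "HTTPS", 3389: "RDP"}

def check_ports(host: str):
    for port, name in PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1)
            state = "open" if s.connect_ex((host, port)) == 0 else "closed"
            print(f"{port:>5} ({name}): {state}")

check_ports("192.0.2.10")  # documentation-range placeholder address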

Unnecessary applications and services use valuable hard drive space and processing power. More importantly, they can be vulnerabilities to an operating system. That’s why many organizations implement the concept of least functionality. This means that an organization configures computers and other information systems to provide only the essential functions. Using this method, you restrict applications, services, ports, and protocols. This control—called CM-7—is described in more detail by NIST at https://nvd.nist.gov/800-53/Rev4/control/CM-7.

The United States Department of Defense describes this concept in DoD instruction 8551.01 at https://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodi/855101p.pdf.

It’s this mindset that can help protect systems from threats that are aimed at insecure applications and services. For example, instant messaging programs can be dangerous. They might be fun for the user but usually are not productive in the workplace (to put it nicely); and from a security viewpoint, they often have backdoors that are easily accessible to attackers. Unless they are required by tech support, they should be discouraged and/or disallowed by rules and policies. You should be proactive when it comes to these types of programs. If a user can’t install an IM program to a computer, then you will never have to remove it from that system. However, if you do have to remove an application like this, be sure to remove all traces that it ever existed. That is just one example of many, but it can be applied to most superfluous programs.

Other programs you should watch out for are remote control programs. Applications that enable remote control of a computer should be avoided if possible. For example, Remote Desktop Connection is a commonly used Windows-based remote control program. By default, this program uses inbound port 3389, which is well known to attackers—an obvious security threat. Consider using a different port if the program is necessary, and if not, make sure that the program’s associated service is turned off and disabled. Check whether any related services need to be disabled as well. Then verify that their inbound ports are no longer functional and that they are closed and secured. Confirm that any shares created by an application are disabled as well. Basically, remove all instances of the application or, if necessary, reimage the computer!

Uninstalling applications on a few computers is feasible, but what if you have a larger network? Say, one with 1,000 computers? You can’t expect yourself or your computer techs to go to each and every computer locally and remove applications. That’s when centrally administered management systems come into play. Examples include Microsoft’s Endpoint Configuration Manager and the variety of mobile device management (MDM) suites available. These programs allow you to manage lots of computers’ software, configurations, and policies, all from a local workstation.

Third-party Risks

When an organization is dealing with computer security, a risk is the possibility of a malicious attack or other threat causing damage or downtime to a computer system. Generally, this is done by exploiting vulnerabilities in a computer system or network. The more vulnerabilities, the greater the risk. Smart organizations are extremely interested in managing vulnerabilities and thereby managing risk. Risk management can be defined as the identification, assessment, and prioritization of risks, and the mitigating and monitoring of those risks. Specifically, in relation to computer hardware and software, risk management is also known as information assurance (IA). Many well-known breaches have been caused by attackers leveraging misconfigured or vulnerable systems and applications at third parties (such as a business partner or vendor).

Vendor management is crucial for any organization. Whether you are part of a small company or a global enterprise, third-party vendors have become essential to business operations. These third-party vendors include outsourcing companies, the software and hardware you purchase for your infrastructure, cloud service providers, and more. Risk management should include the evaluation of third-party vendors, along with their security capabilities, and should even include security requirements in contract negotiations. In addition, if you are employing contractors from an external vendor, you should always audit the level of access that those contractors have to your resources and data.

Many organizations hire external system integration companies to deploy new technologies and applications. You must understand your system integrator’s security capabilities and what risks misconfigurations or vulnerable designs can bring to your organization.

Your organization may also leverage outsourced code development. No software is immune to security vulnerabilities. You must have a way for the vendor to validate and test for any software security vulnerabilities that could be introduced in the outsourced code. You should ask vendors to demonstrate how they test for vulnerabilities, including but not limited to static and dynamic analysis, as well as software composition analysis.

Lack of vendor support for current and legacy software and systems also introduces a big risk to your organization. If patches for software and hardware vulnerabilities are not available due to lack of vendor support, those vulnerabilities can be leveraged by attackers to compromise your systems. The risk introduced by legacy platforms and software is covered later in this chapter. You should consistently reassess your vendor processes and support.

In Chapter 5, “Understanding Different Threat Actors, Vectors, and Intelligence Sources,” you learned about supply chain attacks. Supply chain attacks are also called value-chain attacks. One of the most effective attacks for mass compromise is to attack the supply chain of a vendor to tamper with hardware and/or software. This tampering might occur in-house or earlier, while products are in transit through the manufacturing supply chain. In addition to traditional vendors of software and hardware, cloud offerings are also susceptible to supply chain attacks. Cloud services are an integral part of most businesses nowadays. The compromise of the underlying cloud infrastructure or services can be catastrophic. For example, an attacker implanting a backdoor in all the base images used for your virtual machines in the cloud could lead to massive compromise of your applications and their users.

Insecure data storage is a major concern of most companies nowadays—not only for data stored in your data center but also in the cloud. Without a doubt, the data needs to be encrypted, but more specifically three types of data: data in use, data at rest, and data in transit. Data in processing (also known as data in use) can be described as actively used data undergoing constant change; for example, it could be stored in databases or spreadsheets. Data at rest is inactive data that is archived—backed up to tape or otherwise. Data in transit (also known as data in motion) is data that crosses the network or data that currently resides in computer memory.
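
As a small illustration of protecting data at rest, the following Python sketch encrypts a payload with the third-party cryptography package’s Fernet construction (AES-based authenticated encryption). Key management is the hard part and is out of scope here; the key is generated in place purely for illustration.

# Minimal sketch of encrypting data at rest (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, protect this key in a KMS or HSM
f = Fernet(key)

token = f.encrypt(b"customer records batch 2024-10")
print(token)                  # ciphertext, safe to write to disk or cloud storage
print(f.decrypt(token))       # recovers the original plaintext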

Redundant systems and data backups should also be taken seriously. The whole concept revolves around single points of failure. A single point of failure is an element, object, or part of a system that, if it fails, causes the whole system to fail. By implementing redundancy, you can bypass just about any single point of failure. You will learn more about data and system redundancy in Chapter 13, “Implementing Cybersecurity Resilience.”

The file system a computer uses dictates a certain level of security. On Microsoft Windows computers, the best option is NTFS, which is more secure, enables logging (oh so important), supports encryption, and supports much larger maximum partition and file sizes. Just about the only place where FAT32 and NTFS are on a level playing field is that they support the same number of file formats. By far, then, NTFS is the best option. If a volume uses FAT or FAT32, it can be converted to NTFS using the following command:

convert volume /FS:NTFS

For example, if you want to convert a USB flash drive named M: to NTFS, the syntax would be

convert M: /FS:NTFS

There are additional options for the convert command. To see them, simply type convert /? at the Command Prompt. NTFS enables file-level security and tracks permissions within access control lists (ACLs), which are a necessity in today’s environment. Most systems today already use NTFS, but you never know about flash-based and other removable media. Using the chkdsk command at the Command Prompt or right-clicking the drive in the GUI and selecting Properties can tell you what type of file system it runs.

Generally, the best file system for Linux systems is ext4. It allows for the best and most configurable security. To find out the file system used by your version of Linux, use the fdisk -l command or df -T command.
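
For example, df -T on a typical Linux system produces output along these lines (the device name and sizes shown are illustrative):

$ df -T /
Filesystem     Type 1K-blocks    Used Available Use% Mounted on
/dev/sda1      ext4  41152736 9070928  30167920  24% /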

System files and folders, by default, are hidden from view to protect a Windows system, but you never know. To permanently configure the system to not show hidden files and folders, navigate to the File Explorer (or Folder) Options dialog box. Then select the View tab, and under Hidden Files and Folders, select the Don’t Show Hidden Files, Folders, or Drives radio button. To configure the system to hide protected system files, select the Hide Protected Operating System Files checkbox, located below the radio button previously mentioned. This way, you turn off the ability to view such files and folders as bootmgr and pagefile.sys. You might also need to secure a system by turning off file sharing, which in most versions of Windows can be done within the Network and Sharing Center.

In the past, I have made a bold statement: “Hard disks will fail.” But it’s all too true. It’s not a matter of if; it’s a matter of when. By maintaining and hardening the hard disk with various hard disk utilities, you attempt to stave off that dark day as long as possible. You can implement several strategies when maintaining and hardening a hard disk:

  • Remove temporary files: Temporary files and older files can clog up a hard disk, cause a decrease in performance, and pose a security threat. It is recommended that Disk Cleanup or a similar program be used. Policies can be configured (or written) to run Disk Cleanup every day or at logoff for all the computers on the network.

  • Periodically check system files: Every once in a while, it’s a good idea to verify the integrity of operating system files. You can do a file integrity check in the following ways:

    • With the chkdsk command in Windows. This command examines the disk and provides a report. It can also fix some errors with the /F option.

    • With the SFC (System File Checker) command in Windows. This utility checks and, if necessary, replaces protected system files. It can be used to fix problems in the operating system and in other applications such as Internet Explorer. A typical command you might type is SFC /scannow. You can use this command if chkdsk is not successful at making repairs.

    • With the fsck command in Linux. This command is used to check and repair a Linux file system. The synopsis of the syntax is fsck [ -sAVRTNP ] [ -C [ fd ] ] [ -t fstype ] [filesys ... ] [--] [ fs-specific-options]. You can find more information about this command at the corresponding man page for fsck. A derivative, e2fsck, is used to check a Linux ext2fs (second extended file system). Also, you can download open-source data integrity tools for Linux, such as Tripwire.

  • Defragment drives: Applications and files on hard drives become fragmented over time. For a server, this could be a disaster because the server cannot serve requests in a timely fashion if the drive is too thoroughly fragmented. To defragment the drive, you can use Microsoft’s Disk Defragmenter, the command-line defrag command, or other third-party programs.

  • Back up data: Backing up data is critical for a company. It is not enough to rely on a fault-tolerant array. Individual files or the entire system can be backed up to another set of hard drives, to optical discs, to tape, or to the cloud. Microsoft domain controllers’ Active Directory databases are particularly susceptible to attack; the system state for these operating systems should be backed up in case the server fails and the Active Directory needs to be recovered in the future.

  • Use restoration techniques: In Windows, restore points should be created on a regular basis for servers and workstations. The System Restore utility (rstrui.exe) can fix issues caused by defective hardware or software by reverting to an earlier point in time. Registry changes made by hardware or software are reversed in an attempt to force the computer to work the way it did previously. Restore points can be created manually and are also created automatically by the operating system before new applications, updates, or hardware are installed. macOS uses the Time Machine utility, which works in a similar manner. Though there is no similar tool in Linux, you can back up the /home directory to a separate partition. When these contents are decompressed onto a new install, most of the Linux system and settings are restored. Another option in general is to use imaging (cloning) software. Remember that these techniques do not necessarily back up data and that you should treat the data as a separate entity that needs to be backed up regularly.

  • Consider whole disk encryption: Finally, you can use whole disk encryption to secure the contents of the drive, making it harder for attackers to obtain and interpret its contents.

A recommendation I give to all my students and readers is to separate the operating system from the data physically. If you can have each on a separate hard drive, it can make things a bit easier just in case the OS is infected with malware (or otherwise fails). The hard drive that the OS inhabits can be completely wiped and reinstalled without worrying about data loss, and applications can always be reloaded. Of course, you should back up settings (or store them on the second drive). If a second drive isn’t available, consider configuring the one hard drive as two partitions—one for the OS (or system) and one for the data. By doing this and keeping a well-maintained computer, you are effectively hardening the OS.

By maintaining the workstation or server, you are hardening it as well. This process can be broken down into six steps (and one optional step):

Step 1. Use a surge protector or UPS. Make sure the computer and other equipment connect to a surge protector, or better yet a UPS if you are concerned about power loss.

Step 2. Update the BIOS and/or UEFI. Flashing the BIOS isn’t always necessary. Check the manufacturer’s website for your motherboard to see if an update is needed.

Step 3. Update the OS. For Windows, this step includes installing any applicable hotfixes and Windows updates beyond that, and setting Windows to alert you when new updates are available. For Linux and macOS, it means simply updating the system to the latest version and installing individual patches as necessary.

Step 4. Update antimalware. This step includes making sure that there is a current license for the antimalware (antivirus and antispyware) and verifying that updates are turned on and the software is regularly scanning the system.

Step 5. Update the firewall. Be sure to have some kind of firewall installed and enabled; then update it. If it is Windows Defender Firewall, updates should happen automatically through Windows Updates. However, if you have a SOHO router with a built-in firewall or other firewall device, you need to update the device’s ROM by downloading the latest image from the manufacturer’s website.

Step 6. Maintain the disks. This step means running a disk cleanup program regularly and checking to see whether the hard disk needs to be defragmented, anywhere from once a week to once a month depending on the amount of usage. It also means creating restore points, doing computer backups, or using third-party backup or drive imaging software.

Step 7. (Optional) Create an image of the system. After all your configurations and hardening of the OS are complete, you might consider creating an image of the system. Imaging the system is like taking a snapshot of the entire system partition. That information is saved as one large file, or a set of compressed files, that can be stored anywhere. It's kind of like System Restore but at another level. The beauty of this process is that if your system fails or is compromised, you can reinstall the entire image quickly and efficiently, with very little configuration necessary. You need to apply only the security and AV updates released since the image was created. Of course, most imaging software has a price tag involved, but the investment can be well worth the cost if you are concerned about the time needed to get your system back up and running in the event of a failure. This is the basis for standardized images in many organizations. By applying mandated security configurations, updates, and so on, and then taking an image of the system, you can create a snapshot in time that you can easily revert to if necessary, while being confident that a certain level of security is already embedded into the image. An example of capturing such an image with Microsoft's DISM tool follows these steps.
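
As a hedged example of Step 7, Microsoft's built-in DISM tool can capture a partition to a WIM image file. This is typically done from the Windows Preinstallation Environment (WinPE) rather than from the running OS, and the paths and image name below are hypothetical:

Dism /Capture-Image /ImageFile:D:\Images\hardened-base.wim /CaptureDir:C:\ /Name:"Hardened Base"

The resulting .wim file can later be applied to a repartitioned disk with Dism /Apply-Image to restore the hardened baseline.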

Note

To clean out a system regularly, consider reimaging it, or if it’s a mobile device, resetting it. Reimaging takes care of any pesky malware by deleting everything and reinstalling to the point in time of the image, or to factory condition. Although you will have to do some reconfigurations, the system will also run much faster because it has been completely cleaned out.

Improper or Weak Patch Management

Patch management is the planning, testing, implementing, and auditing of patches. You must adopt a proper patch management program that includes all your operating systems, the firmware of the devices in your infrastructure, and your applications (on-premises and in the cloud).

To be considered secure, operating systems should have support for multilevel security and be able to meet government requirements. An operating system that meets these criteria is known as a trusted operating system (TOS). Examples of certified trusted operating systems include Windows 10, SELinux, FreeBSD (with the Trusted-BSD extensions), and Red Hat Enterprise Server. For an OS to be considered a TOS, the manufacturer of the system must have strong policies concerning updates and patching.

Even without being a TOS, operating systems should be updated regularly. For example, Microsoft recognizes the deficiencies in an OS and possible exploits that could occur and therefore releases patches to increase OS performance and protect the system.

Before you update, the first thing to do is to find out the version number, build number, and the patch level. For example, in Windows you can find out this information by opening the System Information tool (open the Run prompt and type msinfo32.exe). It is listed directly in the System Summary. You could also use the winver command. In addition, you can use the systeminfo command at the Command Prompt (a great information gatherer!) or simply the ver command.
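
For example, the ver command returns a single line of output similar to the following (the exact build number shown here is illustrative and will vary):

C:\> ver
Microsoft Windows [Version 10.0.19044.1889]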

Then you should check your organization’s policies to see what level a system should be updated to. You should also test updates and patches on systems located on a clean, dedicated, testing network before going live with an update.

Windows uses the Windows Update program to manage updates to the system. Versions before Windows 10 include this feature in the Control Panel. It can also be accessed by typing wuapp.exe at the Run prompt. In Windows 10 it is located in the Settings section or can be opened by typing ms-settings:windowsupdate at the Run prompt.

There are various options for the installation of Windows updates, including Automatic Install, Defer Updates to a Future Date, Download with Option to Install, Manual Check for Updates, and Never Check for Updates. The types available to you will differ depending on the version of Windows. Businesses usually frown on Automatic Install, especially in the enterprise. Generally, as the security administrator, you want to specify what is updated, when it is updated, and which computers receive the update. This way you eliminate problems such as patch level mismatch and network usage issues.

In some cases, your organization might opt to turn off Windows Update altogether. Depending on your version of Windows, turning it off may or may not be possible within the Windows Update program. However, your organization might go a step further and specify that the Windows Update service be stopped and disabled, thereby disabling updates. This can be done in the Services console window (Run > services.msc) or from the Command Prompt with one of the methods discussed earlier in the chapter; the service name for Windows Update is wuauserv.
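
For example, the following two commands, run from an elevated Command Prompt, stop the Windows Update service and prevent it from starting automatically (note the required space after start=):

net stop wuauserv
sc config wuauserv start= disabled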

Patches and Hotfixes

The best place to obtain patches and hotfixes is from the manufacturer’s website. The terms patches and hotfixes are often used interchangeably. Windows updates are made up of hotfixes. Originally, a hotfix was defined as a single problem-fixing patch to an individual OS or application installed live while the system was up and running and without needing a reboot. However, the meaning of this term has changed over time and varies from vendor to vendor. (Vendors may even use both terms to describe the same thing.) For example, if you run the systeminfo command at the Command Prompt of a Windows computer, you see a list of hotfixes. They can be identified with the letters KB followed by seven numbers. Hotfixes can be single patches to individual applications, or they might affect the entire system.
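
For example, you can pipe the systeminfo output through findstr to list the installed hotfixes, or to check for one particular update (the KB number shown here is purely illustrative):

systeminfo | findstr /i "KB"
systeminfo | findstr /i "KB5012345"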

On the other side of the spectrum, a gaming company might define hotfixes as a “hot” change to the server with no downtime, and no client download is necessary. The organization releases them if they are critical, instead of waiting for a full patch version. The gaming world commonly uses the terms patch version, point release, or maintenance release to describe a group of file updates to a particular gaming version. For example, a game might start at version 1 and later release an update known as 1.17. The .17 is the point release. (This could be any number, depending on the number of code rewrites.) Later, the game might release 1.32, in which .32 is the point release, again otherwise referred to as the patch version. This naming convention is common with other programs as well.

Administrators should keep up with software and hardware provider security advisories and disclosed vulnerabilities. This keeps you “in the know” when it comes to the latest updates. And this advice applies to server and client operating systems, server add-ons such as Microsoft Exchange or SQL Server, Office programs, web browsers, and the plethora of third-party programs that an organization might use. Your job just got a bit busier!

Of course, you are usually not concerned with updating games in the working world; you should remove them from a computer if they are found (unless perhaps you work for a gaming company). Multimedia software such as Camtasia, however, is prevalent in some companies, and web-based software such as a bulletin-board system is also common and susceptible to attack.

Patches generally carry the connotation of a small fix in the mind of the user or system administrator, so larger patches are often referred to as software updates, service packs, or something similar. However, if you were asked to fix a single security issue on a computer, a patch would be the solution you would want. For example, various Trojans attack older versions of Microsoft Office for Mac. To counter them, Microsoft released a specific patch for those versions of Office for Mac that disallow remote access by the Trojans.

Before installing an individual patch, you should determine whether it perhaps was already installed as part of a group update. For example, you might read that a particular version of macOS had a patch released for iTunes, and being an enthusiastic iTunes user, you might consider installing the patch. But you should first find out the version of the OS you are running; it might already include the patch. To find this information, simply click the Apple menu and then click About This Mac.

Remember: All systems need to be patched at some point. It doesn’t matter if they are Windows, macOS, Linux, UNIX, Android, iOS, hardware appliances, kiosks, automotive computers…need I go on?

Unfortunately, sometimes patches are designed poorly, and although they might fix one problem, they could possibly create another, which is a form of software regression. Because you never know exactly what a patch to a system might do, or how it might react or interact with other systems, it is wise to incorporate patch management.

Patch Management

It is not wise to go running around the network randomly updating computers…not to say that you would do so, however! Patching, like any other process, should be managed properly. The process of patch management includes the planning, testing, implementing, and auditing of patches. I use the following four steps; other companies might have a slightly different patch management strategy, but each of the four concepts should be included:

  • Planning: Before actually doing anything, you should set a plan into motion. The first thing that needs to be decided is whether the patch is necessary and whether it is compatible with other systems. The Microsoft Security Compliance Toolkit (SCT) is one example of a program that can identify security misconfigurations on the computers in your network, letting you know whether patching is needed. If the patch is deemed necessary, the plan should consist of a way to test the patch in a “clean” network on clean systems, how and when the patch will be implemented, and how the patch will be checked after it is installed.

  • Testing: Before you automate the deployment of a patch among a thousand computers, it makes sense to test it on a single system or small group of systems first. These systems should be reserved for testing purposes only and should not be used by “civilians” or regular users on the network. I know, this is asking a lot, especially given the number of resources some companies have. But the more you can push for at least a single testing system that is not a part of the main network, the less you will be to blame if a failure occurs!

  • Implementing: If the test is successful, the patch should be deployed to all the necessary systems. In many cases larger updates are done in the evening or over the weekend. Patches can be deployed automatically using software such as Microsoft’s Endpoint Configuration Manager and third-party patch management tools.

  • Auditing: When the implementation is complete, the systems (or at least a sample of systems) should be audited—first, to make sure the patch has taken hold properly, and second, to check for any changes or failures due to the patch. Microsoft's Endpoint Configuration Manager and third-party tools can be used in this endeavor. A simple per-system spot check is shown after this list.
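
As a minimal spot check during the auditing phase (not a replacement for Endpoint Configuration Manager or a dedicated patch management tool), you can query an individual Windows system for a specific update. The KB number here is hypothetical:

powershell -Command "Get-HotFix -Id KB5012345"

If the update is installed, Get-HotFix returns a result; if it is not, the command returns an error, which a deployment script can catch and report.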

Note

The concept of patch management, in combination with other application/OS hardening techniques, is collectively referred to as configuration management.

Some Linux-based and Mac-based programs and services have also been developed to help manage patching and the auditing of patches. Red Hat offers services to help system administrators with all the RPMs they need to download and install, which can quickly become a mountain of work! And for those who run GPL Linux distributions, third-party services are also available. A network with a lot of mobile devices benefits greatly from the use of an MDM platform. Even with all these tools at an organization's disposal, sometimes patch management is just too much for one person, or even for an entire IT department, and an organization might opt to contract that work out.
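
For example, on a Red Hat Enterprise Linux system you might list and then apply pending security updates as follows (exact subcommand and plugin support varies by version):

sudo yum updateinfo list security    # list security advisories that apply to this system
sudo yum update --security -y        # install only security-related updates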

Several standards such as the Common Security Advisory Framework (CSAF) and the Open Vulnerability and Assessment Language (OVAL) are designed to provide a machine-readable format of security advisories to allow automated assessment of vulnerabilities. CSAF is also the name of the technical committee in the OASIS standards organization. CSAF is the successor of the Common Vulnerability Reporting Framework (CVRF). CSAF enables different stakeholders across different organizations to share critical security-related information in a single format, speeding up information exchange and digestion.

OVAL is an international standard maintained by the Center for Internet Security (CIS). OVAL was originally created by MITRE and then transferred to CIS. You can obtain detailed information about the OVAL specification at https://oval.cisecurity.org. In short, OVAL can be defined in two parts: the OVAL Language and the OVAL Interpreter. OVAL is not a language like C++ but is an XML schema that defines and describes the XML documents to be created for use with OVAL.

OVAL has several uses, one of which is as a tool to standardize security advisory distributions. Software vendors need to publish vulnerabilities in a standard, machine-readable format. By including an authoring tool, definitions repository, and definition evaluator, OVAL enables users to standardize how their security advisories are produced and consumed. Other uses for OVAL include vulnerability assessment, patch management, auditing, threat indicators, and so on.
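
As a hedged illustration of consuming a machine-readable advisory, the following one-liner assumes a vendor publishes a CSAF 2.0 JSON document at a hypothetical URL; the jq filter extracts the CVE identifier of each vulnerability listed in the advisory:

curl -s https://example.com/advisories/vendor-sa-2022-001.json | jq -r '.vulnerabilities[].cve'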

Legacy Platforms

You might be surprised at the number of small, medium, and large organizations that are still using legacy platforms and devices that have passed the vendor's last day of software and hardware support. These legacy devices are often core infrastructure devices such as routers and switches. If you run devices past the last day on which a vendor provides software and hardware fixes, it is almost guaranteed that you will be running vulnerable devices, because vendors will not investigate or patch security vulnerabilities in devices that are out of support. This, in turn, introduces a huge risk to the companies and service providers that run them.

The Impact of Cybersecurity Attacks and Breaches

It is obvious that major cybersecurity attacks and breaches can be catastrophic to many organizations. You just have to log in to your favorite news site or even Twitter to see the millions of dollars lost because of cybersecurity incidents and breaches.

The following are some of the most common categories of the impact of cybersecurity attacks and breaches:

  • Data loss, data breaches, and data exfiltration: Data loss caused by a breach can range from just one record to millions of records stolen by an attacker. Attackers can exfiltrate data from an organization using different techniques. They also can perform different types of obfuscation and evasion techniques to go undetected (including encoding of data, tunneling, and encryption). The impact on privacy because of a data breach can be very significant. Some of the data from a data breach can be “replaced” (such as a credit card number), but other data (like a Social Security number or health record) cannot.

  • Identity theft: Criminals often use breached data for identity theft. They use or buy personal records and data from illegal sites on the dark web and other sources to steal the identities of individuals.

  • Financial: The impact and consequences of a breach or a cybersecurity incident can lead to fines and lawsuits against the company. In some cases, even executives could be fined or sued.

  • Reputation: The brand and reputation of a company can also be damaged by major cybersecurity incidents and breaches. Customers can lose confidence and trust in the company that has lost their records because of a cybersecurity breach and in some cases because of negligence.

  • Availability loss: Cybersecurity incidents can also lead to denial-of-service conditions that, in turn, can also cause significant financial impact. Outages from cybersecurity incidents could be catastrophic for companies and, in some cases, their customers.

When dealing with dollars, risk assessments should be based on a quantitative measurement of risk, impact, and asset value. This is why you have to build a good impact assessment methodology when you perform risk analysis. An excellent tool to create during risk assessment is a risk register, also known as a risk log, which helps to track issues and address problems as they occur. After the initial risk assessment, you will continue to use and refer to the risk register. It can be a great tool for just about any organization but can be of more value to certain types of organizations, such as manufacturers that utilize a supply chain. In this case, the organization would want to implement a specialized type of risk management called supply chain risk management (SCRM). In this approach, the organization collaborates with suppliers and distributors to analyze and reduce risk.

When it comes to risk assessment, qualitative risk analysis is an assessment that assigns numeric values to the probability of a risk and the impact it can have on the system or network. Unlike its counterpart, quantitative risk assessment, it does not assign monetary values to assets or possible losses. It is the easier, quicker, and cheaper way to assess risk but cannot assign asset value or give a total for possible monetary loss. You will learn more about risk assessment in Chapter 34, “Summarizing Risk Management Processes and Concepts.”
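
As a brief preview of the quantitative approach (covered in depth in Chapter 34), the single loss expectancy (SLE) and annualized loss expectancy (ALE) are calculated from the asset value (AV), exposure factor (EF), and annualized rate of occurrence (ARO). The dollar figures below are purely illustrative:

SLE = AV × EF = $100,000 × 0.30 = $30,000
ALE = SLE × ARO = $30,000 × 0.5 = $15,000 per year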

Chapter Review Activities

Use the features in this section to study and review the topics in this chapter.

Review Key Topics

Review the most important topics in the chapter, noted with the Key Topic icon in the outer margin of the page. Table 6-4 lists a reference of these key topics and the page number on which each is found.

Table 6-4 Key Topics for Chapter 6

Key Topic Element | Description | Page Number
Paragraph | Defining cloud computing | 138
List | Listing the cloud computing service models | 138
Table 6-2 | Programming Vulnerabilities and Attacks | 149
List | Listing the most prevalent types of weak configurations | 150
Table 6-3 | Ports and Their Associated Protocols | 152
Paragraph | Description of the role of vendor management and system integration | 155
Paragraph | Description of outsourced code development and security concerns | 155
Paragraph | Description of the lack of vendor support that puts an organization at risk | 156
Paragraph | Description of data storage concerns | 156
List | Listing several strategies for maintaining and hardening a hard disk | 157
List | The process of maintaining workstations and servers | 159
Paragraph | Description of patch management | 160
List | Listing the patch management strategy phases | 163
Section | Legacy Platforms | 165
List | Surveying the most common categories of the impact of cybersecurity attacks and breaches | 165

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

on-premises

cloud computing

software as a service (SaaS)

infrastructure as a service (IaaS)

platform as a service (PaaS)

public cloud

private cloud

hybrid cloud

community cloud

cloud access security broker (CASB)

Common Vulnerabilities and Exposures (CVE)

zero-day vulnerability

vendor management

system integration

outsourced code development

lack of vendor support

supply chain

data storage

patch management

Trusted Operating System (TOS)

patches

legacy platforms

Review Questions

Answer the following review questions. Check your answers with the answer key in Appendix A.

1. What software tool or service acts as the gatekeeper between a cloud offering and the on-premises network, allowing an organization to extend the reach of its security policies beyond its internal infrastructure?

2. What is the name given to a type of vulnerability that is disclosed by an individual or exploited by an attacker before the creator of the software can create a patch to fix the underlying issue?

3. What cloud architecture model is a mix of public and private, but one where multiple organizations can share the public portion?

4. What type of vulnerability occurs when an attacker obtains control of a target computer through some sort of vulnerability, gaining the power to execute commands on that remote computer?

5. What protocol uses TCP ports 465 or 587 in most cases?
