Chapter 11. Monitoring and Auditing

This chapter covers the following subjects:

Monitoring Methodologies: Monitoring the network is extremely important, yet often overlooked by network security administrators. In this section, you learn about the various monitoring methodologies that applications and IDS/IPS solutions use.

Using Tools to Monitor Systems and Networks: Here, we delve into the hands-on again. Included in this section are performance analysis tools, such as Performance Monitor, and protocol analysis tools, such as Wireshark and Network Monitor.

Conducting Audits: Full-blown audits might be performed by third-party companies, but you as the security administrator should be constantly auditing and logging the network and its hosts. This section gives some good tips for executing an audit and covers some of the tools you would use on a Windows server to perform audits and log them properly.

This chapter covers the CompTIA Security+ SY0-201 objectives 4.4, 4.5, 4.6, and 4.7.

In this chapter, we discuss monitoring and auditing. Key point: Monitoring alone does not constitute an audit, but audits usually include monitoring. So we cover some monitoring methodologies and monitoring tools before we get into computer security audits. This chapter assumes that you have read through Chapter 10, “Vulnerability and Risk Assessment,” and that you will employ the concepts and tools you learned about in that chapter when performing an audit. Chapter 10 and this chapter are strongly intertwined; I broke them into two chapters because there was a bit too much information for just one, and I want to differentiate somewhat between risk and audits. Regardless, these two chapters are all about putting on your sleuthing hat. You might be surprised, but many networking and operating system security issues can be solved by using that old Sherlockian adage: “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.” This process of elimination is one of the cornerstones of a good IT troubleshooter, and it works well on the actual CompTIA Security+ exam.

Foundation Topics: Monitoring Methodologies

To operate a clean, secure network, you must keep an eye on your systems, applications, servers, network devices, and the entire network in general. One way to do this is to monitor the network. This surveillance of the network will, in and of itself, increase the security of your entire infrastructure. By periodically watching everything that occurs on the network, you become more familiar with day-to-day happenings and over time get quicker at analyzing whether an event is legitimate. It helps to think of yourself as Hercule Poirot, the Belgian detective—seeing everything that happens on your network, and ultimately knowing everything that happens. It might sound a bit egotistical, but whoever said that IT people don’t have an ego?

This surveillance can be done in one of two ways: manual monitoring and automated monitoring. When manually monitoring the network, you are systematically viewing log files, policies, permissions, and so on. But this can also be automated. For example, several mining programs are available that can mine logs and other files for the exact information you want to know. In addition, applications such as antivirus software, intrusion detection systems, and intrusion prevention systems can automatically scan for errors, malicious attacks, and anomalies. The three main types of automated monitoring are signature-based, anomaly-based, and behavior-based. The following acts as a review of the first two types of monitoring and adds the third type—behavior-based monitoring.

Signature-Based Monitoring

In a signature-based monitoring scenario, frames and packets of network traffic are analyzed for predetermined attack patterns. These attack patterns are known as signatures. The signatures are stored in a database that must be updated regularly to have any effect on the security of your network. Many attacks today have their own distinct signatures. However, only the specific attack that matches the signature will be detected; malicious activity with a slightly different signature might be missed. This makes signature-based monitoring vulnerable to false negatives—when an IDS, IPS, or antivirus system fails to detect an actual attack or error. To protect against this, the signature-based system should be updated regularly with the latest signatures. When it comes to intrusion detection systems, the most basic form is the signature-based IDS. However, some signature-based monitoring systems are a bit more advanced and use heuristic signatures. These signatures incorporate an algorithm that determines whether an alarm should be sounded when a specific threshold is met. This type of signature is CPU-intensive and requires fine-tuning; for example, some signature-based IDS solutions use these signatures to conform to particular networking environments.

Anomaly-Based Monitoring

An anomaly-based monitoring system (also known as statistical anomaly-based) establishes a performance baseline based on a set of normal network traffic evaluations. These evaluations should be taken when the network and servers are under an average load during regular working hours. This monitoring method then compares current network traffic activity with the previously created baseline to detect whether it is within baseline parameters. If the sampled traffic is outside baseline parameters, an alarm will be triggered and sent to the administrator (as long as the system was configured properly). This type of monitoring is dependent on the accuracy of the baseline; an inaccurate baseline increases the likelihood of false positives. A false positive occurs when the system reads a legitimate event as an attack or other error.
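To make the baseline comparison concrete, here is a minimal sketch in Python (not tied to any particular IDS product) of how a statistical anomaly check might work: current samples are compared against the mean and standard deviation of the recorded baseline, and anything beyond a chosen distance raises an alarm. The traffic figures and threshold below are invented purely for illustration.

from statistics import mean, stdev

# Hypothetical baseline: packets per second sampled during normal working hours
baseline_pps = [480, 510, 495, 530, 505, 490, 520, 515]

def is_anomalous(current_pps, samples, threshold=3.0):
    # Flag traffic that falls outside mean +/- threshold * standard deviation
    mu, sigma = mean(samples), stdev(samples)
    return abs(current_pps - mu) > threshold * sigma

print(is_anomalous(505, baseline_pps))   # False: within normal range
print(is_anomalous(4200, baseline_pps))  # True: likely a flood or scan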

Behavior-Based Monitoring

A behavior-based monitoring system looks at the previous behavior of applications, executables, and/or the operating system and compares that to current activity on the system. If an application later behaves improperly, the monitoring system will attempt to stop the behavior. This has an advantage over signature-based and anomaly-based monitoring in that it can, to a certain extent, help with future events without having to be updated. However, because there are so many types of applications, and so many types of relationships between applications, this type of monitoring could set off a high number of false positives. Behavior monitoring should be configured carefully to avoid triggering alarms for legitimate activity.

Table 11-1. Summary of Monitoring Methodologies

image

image

Using Tools to Monitor Systems and Networks

All the methodologies in the world won’t help you unless you know how to use some monitoring tools and how to create baselines. By using performance monitoring gizmos and software, and by incorporating protocol analyzers, you can really “watch” the network and quickly mitigate threats as they present themselves.

In this section, we use the Performance tool in Windows and the Wireshark and Network Monitor protocol analyzers. These are just a couple of examples of the performance and network monitoring tools out there, but they are commonly used in the field and should give you a decent idea of how to work with any tools in those categories.

Performance Baselining

We mentioned in Chapter 3, “OS Hardening and Virtualization,” that baselining is the process of measuring changes in networking, hardware, software, and so on. Let’s get into baselining a little more and show one of the software tools you can use to create a baseline.

Creating a baseline consists of selecting something to measure and measuring it consistently for a period of time. For example, I might want to know what the average hourly data transfer is to and from a server’s network interface. There are a lot of ways to measure this, but I could use a performance monitoring tool or a protocol analyzer to find out how many packets cross the server’s network adapter. This could be run for 1 hour (during business hours, of course) every day for 2 weeks. Selecting different hours for each day would add more randomness to the final results. By averaging the results together, we get a baseline. Then we can compare future measurements of the server to the baseline. This helps us define what the standard load of our server is and the requirements our server needs on a consistent basis. It also helps when installing other like computers on the network.

The term baselining is most often used to refer to monitoring network performance, but it can actually be used to describe just about any type of performance monitoring. The term standard load is often used when referring to servers. A configuration baseline defines what the standard load of the server is for the relevant object or objects. When it comes to performance monitoring applications, objects are all of the components in the server, for example CPU, RAM, hard disk, and so on. They are measured using counters. A typical counter would be the % Processor Time of the CPU, which is the same measurement used by the Task Manager.
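To show what that measure-and-average process might look like in software, here is a minimal Python sketch that assumes the third-party psutil library (any other counter source would work just as well). It samples the total bytes crossing the machine’s network interfaces at a fixed interval and averages the readings into a single baseline figure; in practice you would run it during business hours over a couple of weeks, as described above.

import time
import psutil  # assumed third-party package: pip install psutil

def baseline_network_throughput(samples=60, interval=60):
    # Measure bytes/sec across all adapters once per interval, then average
    readings = []
    last = psutil.net_io_counters()
    for _ in range(samples):
        time.sleep(interval)
        now = psutil.net_io_counters()
        bytes_per_sec = (now.bytes_sent + now.bytes_recv
                         - last.bytes_sent - last.bytes_recv) / interval
        readings.append(bytes_per_sec)
        last = now
    return sum(readings) / len(readings)  # the baseline value

if __name__ == "__main__":
    print("Baseline: %.0f bytes/sec" % baseline_network_throughput(samples=5, interval=10))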

An example of one of these tools is the Performance Monitor tool in Windows. It can help to create baselines measuring network activity, CPU usage, memory, hard drive resources used, and so on. It should also be used when monitoring changes to the baseline. Figure 11-1 shows an example of the Performance Monitor in Windows Vista. The program works basically the same in all versions of Windows, be it client or server. In Windows Vista, it can be found within the Reliability and Performance Monitor program in Administrative Tools.

Figure 11-1. Performance Monitor in Windows Vista

image

image

The CPU is probably the most important component of the computer. In the figure, the CPU counter has hit 100% several times. If the CPU maxes out often, as it does in the figure, a percentage of clients will not be able to obtain access to resources on the computer. If the computer is a server, that means trouble. The CPU spiking that we see could be due to normal usage, malicious activity, or perhaps bad design; further analysis would be necessary to determine the exact cause. If the system is a virtual machine, there is a higher probability of CPU spikes. Proper design of VMs is critical, and they must have a strong platform to run on if they are to serve clients properly. The CPU’s % Processor Time is just one of many counters. A smart security auditor will measure the activity of other objects such as the hard drive, paging file, memory (RAM), network adapter, and whatever else is specific to the organization’s needs. Each object has several counters to select from. For example, if you are analyzing a web server, you would probably want to include the HTTP Service Request Queries object, and specifically the ArrivalRate and CurrentQueueSize counters, in your examination.

Now, the figure shows the Performance Monitor screen, but this only gives us a brief look at our system. The window of time is only a minute or so before the information refreshes. However, we can record this information over x periods of time and create reports from the recorded information. By comparing the Performance reports and logs, we ultimately create the baseline. The key is to measure the same way at the same time each day or each week. This provides accurate comparisons. However, keep in mind that performance recording can be a strain on resources. Verify that the computer in question can handle the tests first before initiating them.

Making reports is all well and good (and necessary), but it is wise to also set up alerts. Alerts can be generated automatically by the system and sent to administrators and other important IT people. These alerts can be triggered in a myriad of ways, all of your choosing: for example, if the CPU were to trip a certain threshold or run at 90% for more than a minute (although this is normal in some environments), or if the physical disk were peaking at 100 MB/s for more than 5 minutes. If these types of things happen often, the system should be checked for malicious activity, illegitimate usage, or the need for an upgrade.
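A bare-bones version of such an alert, again assuming the psutil package, might look like the following sketch. It samples CPU utilization once per second and reports the condition only if the processor stays above 90 percent for a full minute; a real deployment would email or page an administrator rather than print to the console.

import psutil  # assumed third-party package: pip install psutil

def cpu_sustained_above(threshold=90.0, seconds=60):
    # Return True only if CPU utilization stays above the threshold for the whole window
    for _ in range(seconds):
        if psutil.cpu_percent(interval=1) <= threshold:
            return False  # usage dipped below the threshold; no alert
    return True

if cpu_sustained_above():
    print("ALERT: CPU above 90% for more than a minute - check for malicious or illegitimate usage")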

A tool similar to Performance Monitor used in Linux systems (for example, SuSE) is called System Monitor. The different versions of Linux also have many third-party tools that can be used for performance monitoring.

Protocol Analyzers

We’ve mentioned protocol analyzers a couple of times already in this book but haven’t really delved into them too much. There are many protocol analyzers available, some free, some not, and some that are part of an operating system. In this section, we focus on two: Wireshark and Network Monitor. Note that network adapters can work in one of two different modes:

Promiscuous mode—When the network adapter captures all packets that it has access to regardless of the destination of those packets.

Nonpromiscuous mode—When a network adapter captures only the packets that are addressed to it specifically.

Packet capturing programs have different default settings for these modes. Some programs and network adapters can be configured to work in different modes.

Protocol analyzers can be very useful in diagnosing where broadcast storms are coming from on your LAN. A broadcast storm (or extreme broadcast radiation) is when there is an accumulation of broadcast and multicast packet traffic on the LAN coming from one or more network interfaces. These storms could be intentional or could happen due to a network application or operating system error. The protocol analyzer can specify exactly which network adapter is causing the storm.

Protocol analyzers can look inside the packets that make up a TCP/IP handshake. Information that can be viewed includes the SYN, which is the “synchronized sequence numbers” flag, and the ACK, which is the “acknowledgment field significant” flag. By using a protocol analyzer to analyze a TCP/IP handshake, you can uncover attacks such as TCP hijacking. But that is just one way to use a protocol analyzer to secure your network. Let’s talk about a couple of protocol analyzers now.
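As one way to see those flags without a graphical analyzer, the following sketch uses the third-party Scapy library (an assumption on my part; Wireshark and Network Monitor display the same fields) to capture a few TCP packets and print whether the SYN and ACK bits are set. Sniffing normally requires administrator or root privileges.

from scapy.all import sniff, TCP  # assumed third-party package: pip install scapy

def show_tcp_flags(packet):
    # Print the SYN and ACK bits for each captured TCP packet
    if packet.haslayer(TCP):
        flags = packet[TCP].flags
        syn = bool(flags & 0x02)   # synchronized sequence numbers
        ack = bool(flags & 0x10)   # acknowledgment field significant
        print(packet.summary(), "SYN=%s ACK=%s" % (syn, ack))

# Capture 20 TCP packets from the default interface (requires admin/root rights)
sniff(filter="tcp", prn=show_tcp_flags, count=20)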

Wireshark

Wireshark (previously known as Ethereal) is a free download that works on several platforms, including Windows and Windows portables, UNIX, and Mac. It is meant to capture packets on the local computer on which it is installed. But quite often, this is enough to uncover vulnerabilities and monitor the local system and remote systems such as servers. Because Wireshark works in promiscuous mode, it can delve into packets even if they weren’t addressed to the computer it runs on. To discern more information about the remote systems, simply start sessions from the client computer to those remote systems and monitor the packet stream. If that is not enough, the program can be installed on servers as well. However, you should check company policy (and get permission) before ever installing any software on a server.

Imagine that you were contracted to find out whether an organization’s web server was transacting secure data utilizing TLS version 1.0. But the organization doesn’t want anyone logging into the server—all too common! No problem, you could use Wireshark on a client computer, initiate a packet capture, make a connection to the web server’s secure site, and verify that TLS 1.0 is being used by analyzing the packets, as shown in Figure 11-2. If you saw other protocols such as SSL 2.0, that should raise a red flag, and you would want to investigate further, most likely culminating in a protocol upgrade or change.

Figure 11-2. Wireshark Showing a Captured TLS Version 1.0 Packet

image

image

Always take screen captures and save your analysis as proof of the work that you did, and as proof of your conclusions and ensuing recommendations.

Remember that Wireshark can be used with a network adapter configured for promiscuous mode. It is set up by default to collect packets locally and from other sources.
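If you only need a quick, informal check of the TLS version a client negotiates with a web server (the packet capture shown in Figure 11-2 remains the authoritative evidence), Python’s built-in ssl module can report the protocol chosen for a single connection. Note that modern Python defaults refuse the old SSL versions mentioned above, so this sketch only tells you what this particular client negotiated; the hostname is a placeholder.

import socket
import ssl

host = "www.example.com"  # placeholder hostname
context = ssl.create_default_context()

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        # Reports the negotiated protocol, for example 'TLSv1.2' or 'TLSv1.3'
        print(tls.version())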

Network Monitor

Network Monitor is a built-in network sniffer used in Windows Server products. Called netmon for short, it behaves in basically the same fashion as Wireshark. However, built-in versions of Network Monitor up until Windows Server 2003 work in nonpromiscuous mode by default. The full version of the program (available with SMS Server or SCCM 2007) can also monitor network adapters on remote computers. For now, we’ll stick to the default Network Monitor version that comes stock with Windows Server 2003.

How about another real-world example? Let’s just say you were contracted to monitor an FTP server. The organization is not sure whether FTP passwords are truly being encrypted before being sent across the network. (By the way, some FTP programs in their default configuration do not encrypt the password.) You could use the Network Monitor program to initiate a capture of packets on the monitoring server. Then, start up an FTP session on the monitoring server and log in to the FTP server. Afterward, stop the capture and view the FTP packets. Figure 11-3 shows an example of an FTP packet with a clear-text password. Notice that frame 1328 shows the password “locrian” in the details.

Figure 11-3. Network Monitor Showing a Captured FTP Packet with Clear-Text Password

image

image

This particular connection was made with the default FTP client within the Microsoft Command Prompt of a Windows Server 2003 to a built-in FTP server on a separate Windows Server set up with a default configuration. However, you could discern the same information by using Wireshark on a client computer and logging into the FTP server from that client computer.

Passing clear-text passwords across the network is a definite risk. The vulnerability could be mitigated by increasing the level of security on the FTP server and by using more secure programs. For example, if the FTP server were part of Windows IIS, domain-based or other authentication could be implemented. Or perhaps a different type of FTP server could be used, for example Pure-FTPd. Secure FTP client programs could be used as well; instead of using the Command Prompt or a browser to make FTP connections, the FileZilla or WS_FTP programs could be used.
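On the client side, one hedged example of keeping the password off the wire, assuming the server supports explicit FTPS, is Python’s built-in ftplib.FTP_TLS class, which issues AUTH TLS so the control channel is encrypted before the USER and PASS commands are sent. The host and credentials below are placeholders.

from ftplib import FTP_TLS

ftps = FTP_TLS("ftp.example.com")   # placeholder host
ftps.login("user", "password")      # control channel is wrapped in TLS before credentials are sent
ftps.prot_p()                       # protect the data channel as well
print(ftps.nlst())                  # list the directory over the secured session
ftps.quit()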

For step-by-steps on using Wireshark and Network Monitor, see Lab 11-1 in the “Hands-On Labs” section.

SNMP

The Simple Network Management Protocol (SNMP) is a TCP/IP protocol that aids in monitoring network-attached devices and computers. It’s usually incorporated as part of a network management system, such as Windows SMS, or free software, such as Net-SNMP. A typical scenario that uses SNMP can be broken down into three components:

Managed device—A computer or other network-attached device monitored through the use of agents by a network management system.

Agent—An SNMP agent is software deployed by the network management system that is loaded on managed devices. The agent relays the information that the NMS needs in order to monitor the remote managed devices.

Network management system (NMS)—The software, run on one or more servers, that controls the monitoring of network-attached devices and computers.

So, if the IT director asked you to install agents on several computers and network printers, and monitor them from a server, this would be an example of SNMP and the use of a network management system.

SNMP uses ports 161 and 162. SNMP agents receive requests on port 161; these requests come from the network management system or simply “manager.” The manager receives notifications on port 162.

Because applications that use SNMP versions 1 and 2 are less secure, they should be replaced by software that supports SNMP version 3. SNMPv3 provides confidentiality through the use of encrypted packets, which prevents snooping, and also provides message integrity and authentication.
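To illustrate the manager’s side of that exchange, the following sketch assumes the third-party pysnmp library and issues a single SNMP GET for sysDescr.0 against an agent listening on UDP port 161. The address and community string are placeholders, and a production NMS would use SNMPv3 credentials rather than an SNMPv2c community string.

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Query sysDescr.0 from a hypothetical agent at 192.168.1.50 on UDP port 161
error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData("public", mpModel=1),            # SNMPv2c community (placeholder)
           UdpTransportTarget(("192.168.1.50", 161)),
           ContextData(),
           ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0"))))

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(name.prettyPrint(), "=", value.prettyPrint())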

Conducting Audits

Computer security audits are technical assessments made of applications, systems, or networks. They can be done manually or with computer programs. Manual assessments usually include the following:

• Review of security logs

• Review of access control lists

• Review of group policies

• Performance of vulnerability scans

• Review of written organization policies

• Interviews with organization personnel

Programs used to audit a computer or network range from simple tools such as Belarc Advisor, to more complex programs such as Nsauditor, to open source projects such as OpenXDAS.

When I have conducted IT security audits in the past, the following basic steps have helped me organize the entire process:

Step 1. Define exactly what is to be audited.

Step 2. Create backups.

Step 3. Scan for, analyze, and create a list of vulnerabilities, threats, and issues that have already occurred.

Step 4. Calculate risk.

Step 5. Develop a plan to mitigate risk and present it to the appropriate personnel.

Although an independent security auditor might do all these things, a network security administrator will be most concerned with the auditing of files, logs, and systems security settings.

Auditing Files

When dealing with auditing, we are interested in the who, what, and when. Basically, a network security administrator wants to know who did what to a particular resource and when that person did it.

Auditing files can usually be broken down into a three-step process:

Step 1. Turn on an auditing policy.

Step 2. Enable auditing for particular objects such as files, folders, and printers.

Step 3. Review the security logs to determine who did what to a resource and when.

As an example, let’s use Windows Vista. First, we would need to turn on a specific auditing policy, such as “audit object access.” This can be done within the Local Computer Policy, as shown in Figure 11-4. You can select from several different auditing policies, such as logon access and privilege use, but object access is probably the most common, so we’ll use that as the example.

Figure 11-4. Audit Policy Within the Local Computer Policy of a Windows Vista Computer

image

image

Next, we would need to enable auditing for particular objects. Let’s say that we are auditing a folder of data. We would want to go to the Properties dialog box for that folder, then navigate to the Security tab, then click the Advanced button, and finally access the Auditing tab, as shown in Figure 11-5.

Figure 11-5. Auditing Advanced Security Settings for a Folder in Windows Vista

image

From there, we can add users that we want to audit, and we can specify one or more of many different attributes to be audited.

Finally, we need to review the security logs to see exactly what is happening on our system and who is accessing what and when. The security logs will also tell us whether users have succeeded or failed in their attempts to access, modify, or delete objects. And if users deny that they attempted to do something, these logs act as proof that their user account was indeed involved. This is one of several ways of putting nonrepudiation into force. Nonrepudiation is the idea of ensuring that a person or group cannot refute the validity of your proof against them.

A common problem with security logs is that they fail to become populated, especially on older systems. If a user complains to you that they cannot see any security events in the Event Viewer, you should ask yourself the following:

• Has auditing been turned on in a policy? And was it turned on in the correct policy?

• Was auditing enabled for the individual object?

• Does the person attempting to view the log have administrative capabilities?

In addition, you have to watch out for overriding policies. By default, a policy gets its settings from a parent policy; you might need to turn the override option off. On another note, perhaps the audit recording failed for some reason. Many auditing systems also have the capability to send an alert to the administrator in case a recording fails. Hopefully, the system attempts to recover from the failure and continue recording auditing information while the administrator fixes the issue. By answering all these questions and examining everything pertinent to the auditing scenario, you should be able to populate that security log! Security logs are just one component of logging, which we cover in the next section.

Logging

Monitoring logs frequently is an important part of being a security person. Possibly the most important log file in Windows is the Security log, as shown in Figure 11-6. The figure shows the Security log for Windows Vista, but it works in the same fashion, and can be accessed in virtually the same manner, in all versions of Windows.

Figure 11-6. Security Log in Windows Vista

image

image

The Security log can show whether a user was successful at doing a variety of things, including logging on to the local computer or domain; accessing, modifying, or deleting files; modifying policies; and so on. Of course, many of these things need to be configured before they can be logged, although newer versions of Windows automatically log events such as logons and policy modifications. All these security log events can be referred to as audit trails. Audit trails are records or logs that show the tracked actions of users, whether or not the user was successful in the attempt.

A network security administrator should monitor this log file often to keep on top of any breaches, or attempted breaches, of security. By periodically reviewing the logs of applications, operating systems, and network devices, we can find issues, errors, and threats quickly and increase our general awareness of the state of the network.

Several other types of Windows log files should be monitored periodically, including the following:

System—Logs events such as system shutdown or driver failure.

Application—Logs events for operating system applications and third-party programs.

The System and Application logs exist on client and server versions of Windows. A few log files that exist only on servers include the following:

• File Replication Service

• DNS Server

• Directory Service

The File Replication Service log exists on all Windows Servers; the Directory Service log appears if the server has been promoted to a domain controller; and the DNS Server log appears only if the DNS service has been installed on the server. We mentioned the importance of reviewing DNS logs previously in Part II, “Network Infrastructure,” but it is worth reminding you that examining the DNS log can uncover unauthorized zone transfers and other malicious or inadvertent activity on the DNS server. And let’s not forget about web servers—by analyzing and monitoring a web server, you can determine whether the server has been compromised. A drop in CPU and hard disk speed is a common indication of a web server that has been attacked. Of course, it could just be a whole lot of web traffic! It’s up to you to use the log files to find out exactly what is going on.

Other types of operating systems, applications, and devices have their own sets of log files; examples include Microsoft Exchange, SQL database servers, and firewalls. The firewall log is especially important, as shown in Figure 11-7. Note in the figure the dropped packets from addresses on the 169.254.0.0 network, which we know to be the APIPA network number. This is something that should be investigated further because most organizations will have a policy against the use of APIPA addresses.

Figure 11-7. A Basic Firewall’s Log

image

The firewall log can show all kinds of other things, such as malicious port scans and other vulnerability scans. For example, if you dig into a firewall log event and see the following syntax, you would know that a port scan attack has occurred:

S=207.50.135.54:53 – D=10.1.1.80:0

S=207.50.135.54:53 – D=10.1.1.80:1

S=207.50.135.54:53 – D=10.1.1.80:2

S=207.50.135.54:53 – D=10.1.1.80:3

S=207.50.135.54:53 – D=10.1.1.80:4

S=207.50.135.54:53 – D=10.1.1.80:5

Note that the source IP address (which is public and therefore most likely external to your network) uses port 53 outbound to run a port scan of 10.1.1.80, starting with port 0 and moving on from there. The firewall is usually the first line of defense, but even if you have an IDS or IPS in front of it, you should review those firewall logs often.
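That pattern (one source address probing many different destination ports in quick succession) is straightforward to flag in software. The sketch below parses lines in the same hypothetical S=/D= format shown above and reports any source address that touches more than a set number of distinct destination ports; the threshold is arbitrary and would be tuned to your environment.

import re
from collections import defaultdict

LINE = re.compile(r"S=(?P<src>[\d.]+):\d+ .* D=(?P<dst>[\d.]+):(?P<port>\d+)")

def find_port_scans(log_lines, port_threshold=20):
    # Return sources that probed more distinct destination ports than the threshold
    ports_seen = defaultdict(set)
    for line in log_lines:
        match = LINE.search(line)
        if match:
            ports_seen[match.group("src")].add(int(match.group("port")))
    return {src: ports for src, ports in ports_seen.items()
            if len(ports) > port_threshold}

# Example usage with a saved firewall log file:
# with open("firewall.log") as f:
#     print(find_port_scans(f, port_threshold=20))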

Log File Maintenance and Security

The planning, maintenance, and security of the log files should be thoroughly considered. A few things to take into account include the configuration and saving of the log files, backing up of the files, and securing and encrypting of the files.

Before setting up any type of logging system, you should consider the amount of disk space (or other form of storage) that the log requires. You should also contemplate all the different information necessary to reconstruct logged events later. Are the logs stored in multiple locations? Are they encrypted? Are they hashed for integrity? Also up for consideration is the level of detail you will allow in the log; verbose logging is something that admins apply to get as much information as possible. Also, is the organization interested in exactly when an event occurred? If so, time stamping should be incorporated. Although many systems do this by default, some organizations will opt not to use time stamping to reduce CPU usage.

Log files can be saved to a different partition of the logging system, or saved to a different system altogether, although the latter requires a fast secondary system and a fast network. The size and overwriting configuration of the file should also play into your considerations. Figure 11-8 shows an example of the properties of a Windows Server 2003 Security log file. Currently, the file is 640 KB but can grow to a maximum size of 131072 KB (128 MB). Although 128 MB might sound like a lot, larger organizations can eat that up quickly because they will probably audit and log a lot of user actions. When the file gets this big, log mining becomes important. There can be thousands and thousands of entries, making it difficult for an admin to sort through them all, but several third-party programs can make the mining of specific types of log entries much simpler. You can also note in the figure that the log is set to overwrite events if the log reaches its maximum size. Security is a growing concern with organizations in general, so chances are that they will not want events overwritten. Instead, you would select Do Not Overwrite Events (Clear Log Manually). As an admin, you would save and back up the log weekly or monthly, and clear the log at the beginning of the new time period to start a new log. If the log becomes full for any reason, you should have an alert set up to notify you or another admin.

Figure 11-8. Windows Server 2003 Security Log Properties Dialog Box

image

image
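The overwrite setting shown in Figure 11-8 is essentially a log rotation decision, and the same choice comes up with ordinary application logs. As a hedged illustration, Python’s built-in logging module can cap a log file at a maximum size and keep a fixed number of older copies rather than silently overwriting entries; the filename and limits below are arbitrary.

import logging
from logging.handlers import RotatingFileHandler

# Keep the active log under roughly 128 MB and retain 5 older copies before discarding
handler = RotatingFileHandler("security_app.log",
                              maxBytes=128 * 1024 * 1024,
                              backupCount=5)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("audit")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("Auditing started")  # each entry is time stamped automatically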

As with any security configurations or files, the log files should be backed up. The best practice is to copy the files to a remote log server. The files could also be backed up to a separate physical offsite location, or WORM (write-once read-many) media types could be utilized. WORM options such as CD-R and DVD-R are good ways to back up log files, but not rewritable optical discs, mind you. USB flash drives and USB removable hard drives should not be allowed in any area where a computer stores log files. One way or another, a retention policy should be in place for your log files—meaning they should be retained for future reference.

Securing the log files can be done in several ways: first, by employing the aforementioned backup methods; second, by setting permissions on the actual log file. Figure 11-8 shows the filename for the Security log: SecEvent.Evt, located in C:\Windows\System32\config. That is the file you would access to configure NTFS permissions. Just remember that, by default, this file inherits its permissions from the parent folder. File integrity is also important when securing log files. Hashing the log files is a good way to verify their integrity if they are moved and/or copied. And finally, you could encrypt the entire contents of the file so that other users cannot view it. We talk more about hashing and encryption in Chapter 12, “Encryption and Hashing Concepts,” and Chapter 13, “PKI and Encryption Protocols.”
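One simple way to record that integrity check is to compute a cryptographic hash of each log file when it is archived and compare the value again after any move or copy. Python’s built-in hashlib covers this in a few lines; the file path is only an example.

import hashlib

def sha256_of_file(path, chunk_size=65536):
    # Return the SHA-256 digest of a file, read in chunks so large logs fit in memory
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of_file(r"C:\Windows\System32\config\SecEvent.Evt"))  # example path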

Auditing System Security Settings

So far, we have conducted audits on object access and log files, but we still need to audit system security settings. For example, we should review user permissions and group policies.

For user access, we are most concerned with shared folders on the network and their permissions. Your file server (or distributed file system server) can easily show you all the shares it contains. This knowledge can be obtained on a Windows Server by navigating to Computer Management > System Tools > Shared Folders > Shares, as shown in Figure 11-9.

Figure 11-9. Network Shares on a Windows Server 2003

image

Notice the IT share. There are a couple of things that pique my interest from the get-go. For starters, the shared folder is located on the C: drive of this server. Shared folders should actually be on a different partition, drive, or even a different computer. Second, it is in the root, which isn’t a good practice either (blame the author). Of course, this is just a test folder that we created previously, but we should definitely consider the location of our shared folders.

Note

Some companies opt to secure administrative shares, such as IPC$ and Admin$. Although this isn’t actually an option on servers, it is a smart idea for client computers. The following link talks about hidden and administrative shares in depth: http://support.microsoft.com/kb/314984.

Either way, we now know where the IT share is located and can go to that folder in Windows Explorer and review the permissions for it, as shown in Figure 11-10.

Figure 11-10. The IT Folder’s Permissions

image

In the figure, you can see that the IT1 group has Read & Execute, List Folder Contents, and Read permissions. It is wise to make sure that individual users and groups of users do not have more permissions than necessary, or allowed. It is also important to verify proper ownership of the folder; in this example it can be done by clicking the Advanced button and selecting the Owner tab. Figure 11-11 shows that the Administrator is the owner of this resource. We want to make sure that no one else has inadvertently or maliciously taken control.

Figure 11-11. The IT Folder’s Owner Tab in Advanced Security Settings

image

While you are in the Advanced Security Settings dialog box, you can check what auditing settings have been implemented and if they correspond to an organization’s written policies.

Speaking of policies, computer policies should be reviewed as well. Remember that there might be different policies for each department in an organization. This would match up with the various organizational units on a Windows Server. Figure 11-12 shows the Security Settings section of the IT Policy we created earlier in the book. I haven’t counted them, but there are probably thousands of settings. Due to this, an organization might opt to use a security template; if this is the case, verify that the proper one is being used, and that the settings included in that template take into account what the organization has defined as part of its security plan. Templates are accessed by right-clicking Security Settings and selecting Import Policy. If a template is not being used, you will need to go through as many policy objects as possible, especially things such as password policy, security options, and the audit policy itself.

Figure 11-12. Security Settings Within the IT Policy on a Windows Server 2003

image

Individual computers will probably use User Account Control and adhere to the policies created on the server. A spot check should be made of individual computers to verify that they are playing by the rules. In some cases, an organization will require that all client computers are checked. Auditing can be a lot of work, so plan your time accordingly, and be ready for a few hiccups along the way.

Exam Preparation Tasks: Review Key Topics

Review the most important topics in the chapter, noted with the Key Topics icon in the outer margin of the page. Table 11-2 lists a reference of these key topics and the page numbers on which each is found.

image

Table 11-2. Key Topics for Chapter 11

image

Complete Tables and Lists from Memory

Print a copy of Appendix A, “Memory Tables,” (found on the DVD), or at least the section for this chapter, and complete the tables and lists from memory. Appendix B, “Memory Tables Answer Key,” also on the DVD, includes completed tables and lists to check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

Simple Network Management Protocol (SNMP),

baselining,

computer security audits,

security log files,

non-repudiation,

signature-based monitoring,

anomaly-based monitoring,

behavior-based monitoring,

audit trail,

promiscuous mode,

nonpromiscuous mode,

broadcast storm,

SNMP agent,

Network Management System (NMS),

audit trails

Hands-On Labs

Complete the following written step-by-step scenarios. After you finish (or if you do not have adequate equipment to complete the scenario), watch the corresponding video solutions on the DVD.

If you have additional questions, feel free to post them at my website: www.davidlprowse.com in the Ask Dave forum. (Free registration is required to post on the website.)

Equipment Needed

• Windows client (XP or higher).

• Wireshark protocol analyzer.

• Free download: www.wireshark.org/download.html.

• Windows Server (2003 or 2008 preferred) with Network Monitor installed.

• Windows Server (preferably separate and promoted to a domain controller). Used for auditing. You can use a standard member Windows server instead of a domain controller; however, you will be auditing local user accounts instead of domain user accounts. If necessary, you can use a local Windows client, but again, you will be relegated to local accounts.

Lab 11-1: Using Protocol Analyzers

In this lab, you capture and analyze various types of packets with the Wireshark and Network Monitor protocol analyzers. You need to have Network Monitor installed to the server and the FTP service (part of IIS). For more information on setting up an FTP server in Windows Server 2003, see the following link: http://support.microsoft.com/kb/323384.

In the video, I use Wireshark version 1.2.8 and Windows Server 2003 Standard.

We start with the Wireshark protocol analyzer.

The steps are as follows:

Step 1. Download, install, and run Wireshark. Be sure to install WinPcap if you don’t already have it.

Step 2. Start a capture on the primary network adapter.

Step 3. Verify that the program is capturing packets.

Step 4. Open a browser and access a secure website, such as https://www.paypal.com.

Step 5. Return to Wireshark and stop the capture.

Step 6. Create a filter for SSL/TLS packets by typing SSL in the Filter field and pressing Enter.

Step 7. Locate a “Client Hello” TLS packet and open it.

Step 8. Drill down in the Secure Sockets Layer section to find out the version number of TLS. Verify that it is version 1.0 or higher.

Now we’ll move on to using Network Monitor:

Step 9. Access a Windows Server 2003 or 2008 and open the Network Monitor program. If it is not installed, install it now. For more information on installing Network Monitor, see the following link: http://technet.microsoft.com/en-us/library/cc780828%28WS.10%29.aspx. When installed, to open Network Monitor, click Start, then Administrative Tools, and finally Network Monitor.

Step 10. Click OK for the pop-up window.

Step 11. Select the network adapter within Local Computer that you want to capture from; it should be the primary network adapter. Then click OK.

Step 12. Click Capture on the menu bar and click Start. This should start the capture.

Step 13. Go to a client computer and ping the server within the command prompt using the following syntax:

ping -t -l 1500 [ServerIPaddress]

Step 14. Open a second command prompt and connect to the FTP server by using the following syntax:

ftp [ServerIPaddress]

As an example, type ftp 172.29.250.200.

Step 15. Log in as the administrator account of the server. Be sure to use the correct server. Verify that you are logged in; the FTP server should tell you whether it was successful.

Step 16. Run the dir command to view the contents of the FTP server.

Step 17. Return to the server. Stop the capture, and view it by clicking Capture on the menu bar and Stop and View. This should display the results of the capture.

Step 18. Filter the capture for ICMP packets:

A. Click Display on the menu bar and select Filter.

B. Then click the Protocol == Any subset, and click the Edit Expression button.

C. In the Expression window, click the Disable All button.

D. Scroll down to ICMP, select it, and click the Enable button.

E. Click OK, and click OK again.

Now, only ICMP information should display. Review the ICMP log and make sure that your client computer is the only one pinging the server. If other computers are pinging the server, that should be investigated.

Step 19. Filter for FTP packets only. Do this in the same manner that you filtered for ICMP.

Step 20. View the FTP packets and search for the packet containing the password. By default this should be shown in clear text.

Step 21. Drill down into the FTP password packet by double-clicking it. Examine the layers and the ASCII code underneath.

Watch the solution video in the “Hands-On Scenarios” section of the DVD.

Lab 11-2: Auditing Files on a Windows Server

In this lab, you turn on the auditing feature on a Windows Server, permit auditing for specific objects, and analyze the resulting logs and events for those audited objects. In this lab, we use Windows Server 2003, but the procedure is basically the same with other versions of Windows Server. You also need some sort of Windows client to connect to the server. The steps are as follows:

Step 1. Access the Windows Server 2003. You should have an OU and corresponding policy already created. If not, create them now. For more information on how to do this, see Lab 9-1 in Chapter 9, “Configuring Password Policies and User Account Restrictions.”

Step 2. Open the MMC and snap-in the policy associated with the OU into the MMC.

Step 3. Access the policy associated with the OU.

Step 4. Navigate through the following path: Computer Configuration > Windows Settings > Security Settings > Local Policies > Audit Policy.

Step 5. Double-click the Audit object access policy. This displays a properties dialog box.

Step 6. Enable the policy by checking Define these policy settings and selecting the Success and Failure checkboxes. Then click OK.

Step 7. The policy should now show your configuration in the Policy Setting column.

Step 8. Access a shared folder on the server:

A. Verify that you have two basic, populated text files in the folder. If not, create them now.

B. Make sure that one or more users within the correct OU have at least the Read permission to the folder but not Full Control or Modify permissions. Right-click the folder and select Properties.

C. Then, select the Security tab. Verify the user permissions. If you have to change them, be sure to click the Apply button.

Step 9. Click the Advanced button, and click the Auditing tab.

Step 10. Deselect the Allow Inheritable Auditing Entries checkbox and click Apply.

Step 11. Click the Add button to add users or groups to audit. From here, you would add a person in the same manner as you would when creating permissions. Select one account from your OU. Make sure the account has only the permissions mentioned in Step 8.

Step 12. In the Auditing Entry for [folder] dialog box, check mark Delete Subfolders and Files and Delete in the Successful and Failed columns. Then click OK.

Step 13. Click OK for the Advanced Security Settings Auditing tab.

Step 14. Click OK for the Properties dialog box.

Step 15. Connect from a client computer to the share. In the video, we VPN in, but you could log in to the domain as you normally would, or if you are using a local computer, simply make sure that you are logged in to the local computer as the person that is to be audited.

Step 16. Map a drive to the server’s share that is being audited. For example, the path might be \\10.254.254.252\it. It all depends on the IP of your server and the name of the share.

Step 17. Attempt to delete the text files. You should not be able to due to permissions.

Step 18. Return to the server and view the Security log. This can be accessed by navigating to Computer Management > System Tools > Event Viewer > Security.

Step 19. Press F5 to refresh the Security log. This should now display Failure Audits for the audited person.

Step 20. Double-click one of the Failure Audit entries and examine the contents. It should show who did what and when.

Note: Sometimes, the parent policy (such as the Default Domain Policy) overrides any child policies such as the one we created. If necessary, turn off the override policy option by doing the following:

A. Accessing the Properties page of the OU.

B. Go to the Group Policy tab.

C. Highlight the policy and click the Options button.

D. Select the No Override checkbox and click OK.

E. Click OK for the Properties dialog box.

Watch the solution video in the “Hands-On Scenarios” section of the DVD.

View Recommended Resources

Check out these links for more information on the topics covered in this chapter.

• Windows Vista Performance and Reliability Monitoring Step-by-Step Guide: http://technet.microsoft.com/en-us/library/cc722173%28WS.10%29.aspx

• Windows Server 2008: Windows Reliability and Performance Monitor: http://technet.microsoft.com/en-us/library/cc755081%28WS.10%29.aspx

• Wireshark download: www.wireshark.org/download.html

• Wireshark tutorial: www.wireshark.org/news/20060714.html

• How to capture network traffic with Network Monitor: http://support.microsoft.com/kb/148942

• Systems Management Server (SMS) 2003: http://technet.microsoft.com/en-us/library/cc181833.aspx

• System Center Configuration Manager (SCCM) 2007: www.microsoft.com/systemcenter/en/us/configuration-manager.aspx

• Net-SNMP: www.net-snmp.org/

• Windows Server event Logging and Viewing: http://technet.microsoft.com/en-us/library/bb726966.aspx

• How to create and delete hidden or administrative shares on client computers: http://support.microsoft.com/kb/314984

Answer Review Questions

Answer the following review questions. You can find the answers at the end of this chapter.

1. Which of the following is a record of the tracked actions of users?

A. Performance Monitor

B. Audit trails

C. Permissions

D. System and event logs

2. What tool enables you to be alerted if a server’s processor trips a certain threshold?

A. TDR

B. Password cracker

C. Event Viewer

D. Performance Monitor

3. The IT director has asked you to install agents on several client computers and monitor them from a program at a server. What is this known as?

A. SNMP

B. SMTP

C. SMP

D. Performance Monitor

4. One of your coworkers complains to you that they cannot see any security events in the Event Viewer. What are three possible reasons for this? (Select the three best answers.)

A. Auditing has not been turned on.

B. The log file is only 512 KB.

C. The coworker is not an administrator.

D. Auditing for an individual object has not been turned on.

5. Which tool can be instrumental in capturing FTP GET requests?

A. Vulnerability scanner

B. Port scanner

C. Performance Monitor

D. Protocol analyzer

6. Your manager wants you to implement a type of intrusion detection system (IDS) that can be matched to certain types of traffic patterns. What kind of IDS is this?

A. Anomaly-based IDS

B. Signature-based IDS

C. Behavior-based IDS

D. Heuristic-based IDS

7. You are setting up auditing on a Windows XP Professional computer. If set up properly, which log should have entries?

A. Application log

B. System log

C. Security log

D. Maintenance log

8. You have established a baseline for your server. Which of the following is the best tool to use to monitor any changes to that baseline?

A. Performance Monitor

B. Antispyware

C. Antivirus software

D. Vulnerability assessments software

9. In what way can you gather information from a remote printer?

A. HTTP

B. SNMP

C. CA

D. SMTP

10. Which of the following can determine which flags are set in a TCP/IP handshake?

A. Protocol analyzer

B. Port scanner

C. SYN/ACK

D. Performance monitor

11. Which of the following is the most basic form of IDS?

A. Anomaly based

B. Behavioral-based

C. Signature-based

D. Statistical-based

12. Which of the following deals with the standard load for a server?

A. Patch management

B. Group policy

C. Port scanning

D. Configuration baseline

13. Your boss wants you to properly log what happens on a database server. What are the most important concepts to think about while you do so? (Select the two best answers.)

A. The amount of virtual memory that you will allocate for this task

B. The amount of disk space you will require

C. The information that will be needed to reconstruct events later

D. Group policy information

14. Which of the following is the best practice to implement when securing logs files?

A. Log all failed and successful login attempts.

B. Deny administrators access to log files.

C. Copy the logs to a remote log server.

D. Increase security settings for administrators.

15. What is the main reason to frequently view the logs of a DNS server?

A. To create aliases

B. To watch for unauthorized zone transfers

C. To defend against denial of service attacks

D. To prevent domain name kiting

16. As you review your firewall log, you see the following information. What type of attack is this?

S=207.50.135.54:53 – D=10.1.1.80:0
S=207.50.135.54:53 – D=10.1.1.80:1
S=207.50.135.54:53 – D=10.1.1.80:2
S=207.50.135.54:53 – D=10.1.1.80:3
S=207.50.135.54:53 – D=10.1.1.80:4
S=207.50.135.54:53 – D=10.1.1.80:5

A. Denial of service

B. Port scanning

C. Ping scanning

D. DNS spoofing

17. Of the following, which two security measures should be implemented when logging a server? (Select the two best answers.)

A. Cyclic redundancy checks

B. The application of retention policies on log files

C. Hashing of log files

D. Storing of temporary files

18. You suspect a broadcast storm on the LAN. Which tool should you use to diagnose which network adapter is causing the storm?

A. Protocol analyzer

B. Firewall

C. Port scanner

D. Network intrusion detection system

19. Which of the following should be done if an audit recording fails?

A. Stop generating audit records.

B. Overwrite the oldest audit records.

C. Send an alert to the administrator.

D. Shut down the server.

20. Which of the following log files should show attempts at unauthorized access?

A. DNS

B. System

C. Application

D. Security

21. To find out when a computer was shut down, which log file would an administrator use?

A. Security log

B. System log

C. Application log

D. DNS log

22. Which of the following requires a baseline? (Select the two best answers.)

A. Behavior-based monitoring

B. Performance Monitor

C. Anomaly-based monitoring

D. Signature-based monitoring

23. Jason is a security administrator for a company of 4,000 users. He wants to store 6 months of logs to a logging server for analysis. The reports are required by upper management due to legal obligations but are not time-critical. When planning for the requirements of the logging server, which of the following should not be implemented?

A. Performance baseline and audit trails

B. Time stamping and integrity of the logs

C. Log details and level of verbose logging

D. Log storage and backup requirements

24. Your manager wants you to implement a type of intrusion detection system (IDS) that can be matched to certain types of traffic patterns. What kind of IDS is this?

A. Anomaly-based IDS

B. Signature-based IDS

C. Behavior-based IDS

D. Heuristic-based IDS

25. Michael has just completed monitoring and analyzing a web server. Which of the following indicates that the server might have been compromised?

A. The Web server is sending hundreds of UDP packets.

B. The Web server has a dozen connections to inbound port 80.

C. The Web server has a dozen connections to inbound port 443.

D. The Web server is showing a drop in CPU speed and hard disk speed.

Answers and Explanations

1. B. Audit trails are records showing the tracked actions of users. The Performance Monitor is a tool in Windows that enables you to track the performance of objects such as CPU, RAM, network adapter, physical disk, and so on. Permissions grant or deny access to resources. To see whether permissions were granted, auditing must be enabled. The system and other logs record events that happened in other areas of the system; for example, events concerning the operating system, drivers, applications, and so on.

2. D. The Performance Monitor can be configured in such a way that alerts can be set for any of the objects (processor, RAM, paging file) in a computer. For example, if the processor were to go beyond 90% usage for more than 1 minute, an alert would be created and could be sent automatically to an administrator. A TDR is a time-domain reflectometer, an electronic instrument used to test cables for faults. A password cracker is a software program used to recover or crack passwords; an example would be Cain & Abel. The Event Viewer is a built-in application in Windows that enables a user to view events on the computer such as warnings, errors, and other informational events. It does not measure the objects in a server in the way that Performance Monitor does.

3. A. The Simple Network Management Protocol (SNMP) is used when a person installs agents on client computers to monitor those systems from a single remote location. SMTP is used by e-mail clients and servers. SMP is Symmetric Multi-Processing, which is not covered in the Security+ exam objectives. Performance Monitor enables a person to monitor a computer and create performance baselines.

4. A, C, and D. To audit events on a computer, an administrator would need to enable auditing within the computer’s policy, then turn on auditing for an individual object (folder, file, and so on), and then view the events within the Security log of the Event Viewer. 512 KB is big enough for many events to be written to it.

5. D. A protocol analyzer captures data including things such as GET requests that were initiated from an FTP client. Vulnerability scanners and port scanners look for open ports and other vulnerabilities of a host. Performance Monitor is a Windows program that reports on the performance of the computer system and any of its parts.

6. B. When using an IDS, particular types of traffic patterns refer to signature-based IDS. Heuristic signatures are a subset of signature-based monitoring systems, so signature-based IDS is the best answer. Anomaly-based and behavior-based systems use different methodologies.

7. C. After auditing is turned on and specific resources are configured for auditing, you need to check the Event Viewer’s Security log for the entries. These could be successful logons or failed attempts at deleting files; there are literally hundreds of options. The Application log contains errors, warnings, and informational entries about applications. The System log deals with drivers and system files and so on. A System Maintenance log can be used to record routine maintenance procedures.

8. A. Performance monitoring software can be used to create a baseline and monitor for any changes to that baseline. An example of this would be the Performance console window within Windows Server 2003. (It is commonly referred to as the Performance Monitor.) Antivirus and antispyware applications usually go hand in hand and are not used to monitor server baselines. Vulnerability assessment software such as Nessus or Nmap is used to see whether open ports and other vulnerabilities exist on a server.

9. B. SNMP (Simple Network Management Protocol) enables you to gather information from a remote printer. HTTP is the hypertext transfer protocol that deals with the transfer of web pages. A CA is a certificate authority, and SMTP is the Simple Mail Transfer Protocol.

10. A. A protocol analyzer can look inside the packets that make up a TCP/IP handshake. Information that can be viewed includes SYN, which is synchronize sequence numbers, and ACK, which is acknowledgment field significant. Port scanners and Performance Monitor do not have the capability to view flags set in a TCP/IP handshake, nor can they look inside of packets in general.

11. C. Signature-based IDS is the most basic form of intrusion detection systems, or IDS. This monitors packets on the network and compares them against a database of signatures. Anomaly-based, behavioral-based, and statistical-based are all more complex forms of IDS. Anomaly and statistical are often considered to be the same type of monitoring methodology.

12. D. A configuration baseline deals with the standard load of a server. By measuring the traffic that passes through the server’s network adapter, you can create a configuration baseline over time.

13. B and C. It is important to calculate how much disk space you will require for the logs of your database server and verify that you have that much disk space available on the hard drive. It is also important to plan what information will be needed in the case that you need to reconstruct events later. Group policy information and virtual memory are not important for this particular task.

14. C. It is important to copy the logs to a secondary server in case something happens to the primary log server; this way you have another copy of any possible security breaches. Logging all failed and successful login attempts might not be wise, because it will create many entries. The rest of the answers are not necessarily good ideas when working with log files.

15. B. Network security administrators should frequently view the logs of a DNS server to monitor any unauthorized zone transfers. Aliases are DNS names that redirect to a hostname or FQDN. Simply viewing the logs of a DNS server will not defend against denial-of-service attacks. Domain name kiting is the process of floating a domain name for up to five days without paying for the domain name.

16. B. The information listed is an example of a port scan. The source IP address perpetuating the port scan should be banned or blocked on the firewall. The fact that the source computer is using port 53 is of no consequence during the port scan and does not imply DNS spoofing. It is not a denial-of-service attack; note that the destination IP address ends in 80, but the number 80 is part of the IP address and is not the port.

17. B and C. The log files should be retained in some manner, either on this computer or on another computer. By hashing the log files, the integrity of the files can be checked even after they are moved. Cyclic redundancy checks (CRCs) deal with the transmission of Ethernet frames over the network. Temporary files are normally not necessary when dealing with log files.

18. A. A protocol analyzer should be used to diagnose which network adapter on the LAN is causing the broadcast storm. A firewall cannot diagnose attacks perpetuated on a network. A port scanner is used to find open ports on one or more computers. A network intrusion detection system is implemented to locate and possibly quarantine some types of attacks but will not be effective when it comes to broadcast storms.

19. C. If an audit recording fails, there should be sufficient safeguards employed that can automatically send an alert to the administrator, among other things. Audit records should not be overwritten and in general should not be stopped.

20. D. The security log file should show attempts at unauthorized access to a Windows computer. The application log file must deal with events concerning applications within the operating system and some third-party applications. The system log file deals with drivers, system files, and so on. A DNS log will log information concerning the domain name system.

21. B. The system log will show when a computer was shut down (and, for that matter, when it was turned on or restarted). The security log shows any audited information on a computer system. The application log deals with OS apps and third-party apps. The DNS log shows events that have transpired on a DNS server.

22. A and C. Behavior-based monitoring and anomaly-based monitoring require creating a baseline. Many host-based IDS systems will monitor parts of the dynamic behavior and the state of the computer system. An anomaly-based IDS will classify activities as either normal or anomalous; this will be based on rules instead of signatures. Both behavior-based and anomaly-based monitoring require a baseline to make a comparative analysis. Signature-based monitoring systems do not require this baseline because they are looking for specific patterns or signatures and are comparing them to a database of signatures. The performance monitor program can be used to create a baseline on Windows computers but it does not necessarily require a baseline.

23. A. A performance baseline and audit trails are not necessarily needed. Because the reports are not time-critical, a performance baseline should not be implemented. Auditing this much information could be unfeasible for one person. However, it is important to implement time stamping of the logs and store log details. Before implementing the logging server, Jason should check if he has enough storage and backup space to meet his requirements.

24. B. When using an IDS, particular types of traffic patterns refers to signature-based IDS.

25. D. If the Web server is showing a drop in processor and hard disk speed, it might have been compromised. Further analysis and comparison to a pre-existing baseline would be necessary. All the other answers are common for a web server.
