Chapter 17. Auditing UNIX and Linux

Solutions in this chapter:

▪ Patching and Software Installation
▪ Minimizing System Services
▪ Logging
▪ File System Access Control
▪ Additional Security Configuration
▪ Backups and Archives
▪ Auditing to Create a Secure Configuration
▪ Auditing to Maintain a Secure Configuration

Introduction

In this chapter we will introduce the concepts of auditing UNIX and Linux. One of the key secrets to auditing UNIX or Linux is to ensure that you have knowledgeable people available for the audit. The UNIX administrator will generally know the aspects of the system that they have configured, and this can provide a wealth of information that would not otherwise be readily available. (For the purposes of this chapter, the term UNIX will be used to refer to both the multitude of actual UNIX systems and their comparable Linux derivatives.) Figure 17.1 shows CIS benchmarks for various versions of Linux.
Figure 17.1 CIS Linux Benchmarks and Scoring Tools
When coupled with the various UNIX checklists from sources such as the Center for Internet Security (CIS) and NIST, the development of a comprehensive UNIX audit program becomes simple. The primary point to remember is that UNIX was designed for programmers. The default UNIX shells are in effect miniature program interpreters, and the system is a development environment with a simple and open default security model. UNIX shells are in themselves powerful scripting engines with programming capabilities that range from the ability to implement simple filters and searches and create program batches through to the ability to run complex programs such as Web servers.
The first point to comprehend in order to gain an understanding of UNIX is that everything in UNIX is a file. As far as the operating system is concerned, UNIX does not differentiate in its treatment of directories, devices, or even network sockets. To UNIX, a directory is merely a file that contains an inventory of file names and “inodes” (index nodes), which in turn hold the MAC times (Modification, Access, and Change). To the UNIX kernel, hardware is only a special type of file, and in fact many of the problems associated with UNIX security have come as a consequence of being able to treat everything as a file. Although this simplifies many tasks, it also makes system security more difficult. The result is that it is simple to pipe output to or from any file and thus even directly to hardware. For this reason, the security of device files (such as those in either the /dev or the /devices directory) is paramount to the secure operation of UNIX.
The power and flexibility of UNIX come from the ease with which information can be written anywhere. For instance, the output of a command may be written directly to a network socket such that it is sent to a remote machine. In fact, tools such as nc (netcat) make it simple to forward even binary images across networks. One powerful use of this capability is being able to copy or back up entire disk images over a network to a remote host.
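For example, a raw disk image can be streamed to another host with nothing more than dd and a network pipe. The following sketch assumes the traditional netcat option syntax (which varies between implementations) and uses illustrative addresses and device names:

# On the receiving host (192.0.2.10 in this sketch), listen on TCP port 9000
# and write whatever arrives to an image file
nc -l -p 9000 > disk-image.dd

# On the source host, read the raw disk device and stream it across the network
dd if=/dev/sda | nc 192.0.2.10 9000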

Patching and Software Installation

A correctly secured and fully patched host is protected against the vast majority of known vulnerabilities. As with all security controls, though, there needs to be a trade-off. The effort of monitoring every host individually is beyond all but the smallest of sites.
Most operating systems today have the ability to update patches themselves. This ranges from each host automatically going to the vendor's site, to centralized servers that an organization can configure to pull patches from the vendor and distribute them internally once approved. There are two types of patches:
▪ Security patches
▪ General updates
As security auditors, our focus will be primarily on security patches. This does not mean, however, that we can ignore general updates; rather, we should focus on the details of and reasons for the update. For instance, a patch to a financial application may not in itself be a security patch but may still have security implications: an update that improves a package's auditing features or enhances its segregation-of-duties capabilities may be classified as a general update but has clear security implications. Like many things, we need to look at the system holistically. One of the main failings of a UNIX systems audit is to treat the system and the application in isolation.

The Need for Patches

Network security does not replace the need for host security; rather, both have their place. Some hosts (for example, Web servers in public zones) are more critical than others and are more likely to be attacked. In addition, a firewall does little to protect a Web-based application. For these and many other reasons it is essential to maintain a strong regime of host and system security.

Obtaining and Installing System Patches

It is crucial that both the UNIX administrator and auditor understand the difference between security and general patches. It is important that all patching be done in an organized manner. A risk management approach needs to be taken to patching systems. The auditor needs to remember that it is too late to patch a system after it is compromised. The only sure way to clean a compromised system is to rebuild it from the ground up.
The first thing to do is find out what patches are required. Nearly all UNIX vendors provide websites with comprehensive information concerning the nature of patches and some of the main risks associated with those patches.
1 Security patches need to take precedence over other patches. First determine if the following conditions apply:
a) Is the patch required for an active service (that is, a BIND patch for an Internet DNS server)? If the service being patched is not installed on the system, then it may not be necessary to patch the system.
b) Is the service externally vulnerable? It is important to apply security patches for services that are not available externally as well, but the level of risk is lower.
c) Does the patch affect other services on the host? Has the patch been tested on a development or QA system and been found to function correctly in your organization's environment?
2 If the patch affects the system in an undesirable manner (that is, it causes the server to crash or otherwise suffer a measurable reduction in performance), then it would be better to look at other alternatives based on the risk to the system and its value. It may be a better option to filter the service, for example.
3 If the patch is determined to be required to ensure the security of the system, then formal patch procedures should be followed for its implementation. Patch processes vary from vendor to vendor. It is essential to understand the methodology used and to create a procedure that implements it effectively.
There are two main areas that a UNIX auditor needs to consider when auditing system patching. First, does the organization have an effective patch process that is based on risk? Second, is the patch process adhered to? There are two issues here, each of which needs to be addressed. Good corporate governance requires that management implement a policy requiring effective controls. Any such policy is only effective if it leads to a strong process that can provide the desired outcome. To do this any process needs to be measurable. One of the great difficulties in patch management is the allocation of metrics. It is not enough just to measure the number of patches installed or not installed, but rather there needs to be a means of determining whether a patch should be applied or not.
At the least, any patch should be evaluated against existing applications to ensure that the patch will not negatively impact the system it is meant to fix. This is another reason why there are clear benefits to minimizing the number of services and applications provided on any host. Additionally, many standards such as the PCI-DSS (the payment card industry security standards designed to protect credit and payment card information) require that systems are set up to host only individual services.
Although patching is to many people the greatest burden in IT, it is also one of the simplest means of demonstrating a base level of due care. The combination of a patch management program and proof that the program is being followed goes a long way toward demonstrating effective corporate governance. In the event that a system is compromised due to a software vulnerability, there are really two alternatives when an organization is facing a claim for negligence. Either the organization has patched the system and the compromise occurred due to an unknown or undisclosed attack (a zero day vulnerability), or the breach has occurred because of a control failure. In the first instance, a finding of negligence would come down to demonstrating that alternative controls should have been in place. In this instance the onus of proof is on the party seeking to show that your company was negligent.
Alternatively, where a control failure or breach has occurred due to either a system being misconfigured or unpatched, proving that your organization was not negligent will come down to the controls and processes that have been implemented.
With regard to patching, if the organization can demonstrate a risk-based approach and a methodology that provides valid justification for not applying the patch, it is unlikely that it will be found negligent even without applying the patch. Similarly, an effective patch process that has generally been followed but has suffered some failure leading to a breach due to a misconfiguration also provides a good defense to either avoid or at least minimize any action for negligence.
As with all controls, the key is to provide evidence. An ongoing audit program that is run on a regular basis over your UNIX systems will provide this evidence. Most modern operating systems, including UNIX, have a patch management system. Some examples are:
▪ Sun Solaris Patch Manager or PatchPro
▪ System Reliability Manager for Sun Management Center
▪ Linux (Red Hat) up2date
▪ Red Hat Network (RHN) Proxy or Satellite Server
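As a simple starting point, the auditor can query the patch state of the host directly. The commands below are a sketch only; which tool applies depends on the vendor and release:

# Solaris: list the patches installed on the host
showrev -p

# Red Hat Linux: list updates that are available but not yet installed
up2date --list     # older Red Hat releases
yum check-update   # later Red Hat releases

# List recently installed packages to cross-check against change records
rpm -qa --last | head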

Validating the Patch Process

In validating the patch process, the auditor first needs to download the latest patch information from the respective UNIX vendor and test that any security patches recommended for the system have either been installed or, alternatively, that there is a formal and valid justification for why they have not been installed. The auditor should also note that some patches may re-enable default configurations on a service. For this reason, it is important to ensure that the administrator has created a backup of the system prior to installing a patch. A good change management process would require that a back-out path be detailed prior to the implementation of the patch. The process for patching the system should maintain details on obtaining patches and how they need to be tested and installed. Any patches that are downloaded from the Internet must be validated, for example through the use of a cryptographic hash.
This means that the system administrator should, where possible, always verify the digital signature of any signed files. If no digital signature is supplied but a checksum (for example, MD5) is supplied, then the administrator should verify the checksum information to confirm that they have retrieved a valid copy of the patch. If only a generic "sum" checksum is provided, then the process should require that they use this to check the file; be aware that the "sum" checksum should not be considered secure. After the patch has been applied it is important to test the system. The administrator should test that the patch has been applied correctly and is operational (that is, check the version of the software and that it functions correctly).
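As an illustration, the verification steps might look like the following (the patch file names are hypothetical):

# Verify a vendor-supplied MD5 checksum file against the downloaded patch
md5sum -c patch-117350-61.zip.md5

# Where the vendor signs releases, verify the detached signature instead
gpg --verify patch-117350-61.zip.asc patch-117350-61.zip

# The legacy "sum" checksum detects corruption only; it is not secure
sum patch-117350-61.zip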
All this provides evidence in support of the process. This in itself will not make a system secure. What it will do is provide evidence that the organization cares about maintaining the security of its systems and data. This evidence will go a long way toward demonstrating that the organization was not negligent in the event of a breach. What is important to remember is that the question is not if a breach will occur, but when.
There have traditionally been a number of both commercial and noncommercial tools to check system patching and vulnerabilities on UNIX systems. Though most of these do not focus specifically on patch controls but rather scan for vulnerabilities in general, they are nonetheless (and more so for this fact) an essential part of implementing and maintaining a secure UNIX (or Linux) system. Some of the noncommercial products are detailed below.
Tiger Analytical Research Assistant (TARA) is the next stage of the TAMU (Texas A&M University) “tiger” program. Output has been rationalized to provide a more readable report file. TARA has been tested under Red Hat Version 5.x, SGI IRIX, and Solaris. According to the original readme file, tiger is defined as follows:
…tiger is a set of scripts that scan a Un*x system looking for security problems, in the same fashion as Dan Farmer's COPS. ‘tiger’ was originally developed to provide a check of UNIX systems on the A&M campus that want to be accessed from off campus (clearance through the packet filter). As such, we needed something that *anyone* could run if they could figure out how to get it down to their machine.1
COPS is a UNIX security status checker. COPS checks various files and software configurations to see if they have been compromised, and checks to see that files have the appropriate modes and permissions set to maintain the integrity of your security level. The current version makes a limited attempt to detect bugs that are posted in CERT advisories.
Additionally, other packages are available that help you not only audit your system configuration but also automatically change the configuration to improve security. These are generally focused on a specific operating system, however. Here are a couple of examples:
▪ Solaris: Titan Security Toolkit (www.trouble.org/titan/)
▪ Linux: Bastille Linux (www.bastille-linux.org/)
Additionally, scanning tools such as Nessus (www.nessus.org/) are able to find a number of unpatched network services. Coupled with the native patch management tools for the system, a comprehensive evidential trail may be created to prove that your organization was not negligent.

Failures to Patch

One of the biggest causes of security incidents is unpatched systems. The failure to patch vulnerable systems in a timely manner exposes the organization to major risk.
The vast majority of security attacks and compromises across the Internet today are only successful because of the number of unpatched systems. This is especially the case with self-propagating attacks (for example, worms), which rely on a combination of unpatched systems and poor antivirus control processes to take hold initially and subsequently propagate. Many of the worm and virus infections within organizations are still caused by "old" malware for which fixes have been available for many years.
It is essential to develop patch deployment procedures that establish well defined processes within the organization to identify, test, and deploy patches as they are released. This step makes the patch maintenance process much more cost effective.
The patching of system vulnerabilities has become one of the most expensive and time-consuming recurring administrative tasks in the enterprise. The process is also prone to failure, as viruses and worms often use unpatched vulnerabilities as the initial entry point into a protected network and then use other techniques for propagating once inside. Thus, any of the following factors could invalidate the process:
1 A patch that is not identified and installed in time to mitigate damage.
2 Vulnerable systems that were not patched when the patch was deployed.
3 Defective patches that do not properly close the vulnerability.
Unpatched systems can result in other costs to the organization:
1 Costs connected with cleanup after a contamination or security violation.
2 Loss of revenue from system outages and production declines.
3 Loss of reputation and/or customer confidence.
4 Legal liabilities from breach of sensitive records.
5 Loss or corruption of organizational data.
6 System downtime, inability to continue the activities of the business.
7 Theft of organizational resources.
Table 17.1 may be used as an example Business Application Patching Matrix for a UNIX system.
Table 17.1 Business Application Patching Matrix for a UNIX System
Application | Risk | Critical Issue | Medium-Level Issue | Low-Priority Issue
Primary databases | Medium-High | ASAP | After hours | Weekend
Desktop O/S | Medium | After hours | Weekend | Weekend
Desktop applications (e.g., Star Office) | Low-Medium | After hours | Weekend | Monthly
E-mail client software | Medium-High | After hours (same day) | After hours | Monthly
E-mail server | Medium-High | ASAP | After hours | After hours
Firewalls | High-Critical | ASAP | After hours | Weekend
Inaccessible systems | Low | Weekend | Weekend | Monthly
Print server | Low | After hours | Weekend | Weekend
Web application server | Low-High | ASAP | Immediate | After hours
Web database server | Medium-High | ASAP | After hours | Weekend
Web server (brochure-ware) | Low-Medium | ASAP | After hours | Weekend
Commerce Web server | Critical | ASAP | ASAP | After hours
When you are developing a patch maintenance process, always ensure that the following points have been taken into account when patching security vulnerabilities:
1 Continuously monitor systems for vulnerabilities.
2 Identify vulnerable systems and determine severity based on a risk management process.
3 Implement a work-around and create a response plan until a patch is available.
4 Monitor and maintain a patch database for the organization's systems.
5 Test patches for defects or adverse effects on your systems.
6 For substandard patches, decide on an appropriate course of action.
7 Recognize patch effects, such as a need to reboot systems.
8 Install patches in accord with a plan.
9 Confirm patch effectiveness.
10 Confirm patch does not create adverse situations.
11 Review patch deployment.

Example Information Systems Security Patch Release Procedures

The following section provides an example patching process that may be utilized by the organization and that the auditor can then use to validate and measure this control.

Purpose

▪ To ensure that the {organization} environment is up-to-date from a security patch perspective.
▪ To ensure “attackers” or otherwise unauthorized parties do not take advantage of known security holes.
▪ To deter future attacks on the basis that {organization} is secure (reputation).

Details

Every morning the {systems operator} is to notify the {Administrator/Owner} of any new patches that have been released for the following products:
{Example Only – Add Products being monitored}
▪ Sun Solaris
▪ Microsoft Windows 2000 Server and Back Office products
▪ Checkpoint Firewall-1
▪ NAI Gauntlet
▪ BIND
All URLs to any new patches that are released are to be forwarded through to the following people:
(Insert Contact Person)
Any new Security patches are then to be downloaded to http://intranet.company.com/Support/Patches
(Please note that if no new patches are available an e-mail is to be circulated to all communicating this.)
If a new patch is released, the system owner is to assign the release by sending an e-mail to the team member designated as responsible for releasing the software to {Insert Group name}, stating the following:
A new security patch called [patchname] has been downloaded and is available at http://intranet.company.com/Support/Patches{Patch.xxx}. Please install it on all relevant servers.
Note that once the patch is available, it must be released within four (4) hours of initially being downloaded, or before noon on Monday if the issue was identified and a patch released over a weekend. Systems engineers should raise an Impact Item for this change (see Procedures for further details), test in QA, and e-mail impacted parties with an informational message similar to that shown in the following example:
All,
The following security patch will be released at [time]. I will be contacting some of you regarding this change to test for any application impact.
Please call me if you have any questions.
Regards [Engineer]

Vendor Contacts/Patch Sources

The following are a small selection of vendors that you may need to obtain patches from. It is essential that a complete list of all vendors utilized by the organization be included in the UNIX patch procedures.

Linux

A small selection of the many Linux vendor pages:

OpenBSD

Patches

Minimizing System Services

Deleting all unused services on a host not only helps to make a system more secure but also frees memory and other resources. The fewer services that are installed on a system, the easier it will be to manage. By reducing the number of services on a host, the amount of conflicts and other administrative issues are also reduced.
Many other tasks (such as patching, security-related or otherwise) are reduced at the same time, as there is less code running on the system. Most attacks against Internet systems are a result of poorly configured and unpatched systems. Good patch management and few services running on a host are a good starting point toward creating a secure environment.

Guidance for Network Services

An unnecessary service is any service that is not needed by the host to complete its functions. For example, a Linux Web server running FTP, Apache, and SSH but serving no FTP content would have at least FTP as an unnecessary service. It may be argued that SSH is necessary (to upload pages via SCP and for administration), but an unused service should never be left enabled. SSH (utilizing SCP) may be used to administratively upload files to this server in a more secure manner than FTP (thus making FTP administratively redundant).
When assessing a system, an auditor should note any network service on the UNIX system that is running. Next, the auditor should be able to either validate that the service is required on the system or alternatively seek a justification as to why the service is running.

Unnecessary Services

It is essential to always ensure that servers are hardened (that is, patched and unused services removed) prior to having a system "go live." The auditor's role is to verify that any new system is configured against the baseline standard. A default install of nearly any operating system leaves a vast number of services running which, at best, are likely never to be used or, at worst, leave ports open to external break-ins. The first stage in removing unneeded services is to work out which services are running on a host and to decide which are essential to the operation of the system. UNIX is no different. In fact, the primary difference with UNIX is that although it starts with many enabled services, it can be quite simple to turn these off and configure the host as a bastion running only a single service.
In many cases it is also possible to further restrict the individual services on the host. Many services are configurable with access control conditions or lists to further restrict the services needed on a host. A good example of this would be restricting access via SSH to an administrative LAN using the SSH server configuration directives. Client systems and desktops, as well as servers and network devices, come installed with excessive services enabled by default, which does not aid in securing a system. The removal of unnecessary services is needed. It is important to remember that this not only makes the system more secure but also increases the system's efficiency and thus:
1 Makes the systems better value for money (increases ROI)
2 Makes administration and diagnostics on the host easier.
In this pursuit, netstat is one of the most effective tools available to the auditor. This tool lists all active connections in addition to the ports where programs are listening for connections. Simply use the command "netstat -p -a --inet" for a listing of this information. Note, however, that many versions of UNIX do not support the "netstat -p" option. Consequently, on these systems it may be necessary to use other tools in order to find process information. Read your system manual for more information.
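A short session might look like the following; where "netstat -p" is unsupported, lsof (if installed) provides a similar process-to-port mapping:

# List all TCP/IP sockets together with the owning process (Linux syntax)
netstat -p -a --inet

# Alternative on systems without "netstat -p" (requires lsof)
lsof -i -P -n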

Turning Off Services in UNIX

This process will vary depending on the version of UNIX or Linux being run. Most settings are contained within configuration files, though some UNIX variants (such as HP-UX) have a registry system. Always ensure that you have thoroughly investigated the system that you are going to audit before you start the audit.

RPC and Portmapper

UNIX uses "portmap" to register Remote Procedure Call (RPC) programs. If an application wishes to connect to an RPC-based application, it will first query the portmapper for information about the application. This is done in order to save on low-numbered ports: the portmapper allows multiple ports to be assigned as they are needed. Unfortunately, the portmapper service may be named in a variety of ways. For this reason it is essential that a checklist be created for your specific system. Many sites such as SANS, CIS, and NIST have created comprehensive lists dedicated to a number of operating systems. Portmapper may be designated under UNIX as portmap, rpcbind, portmapper, or several other names.
The portmapper application is actually an RPC program as well. The distinction is that it always listens on port 111, both TCP and UDP. On certain operating systems, such as Solaris, the portmapper may also listen on some other high-numbered ports. The role of the portmapper service is to provide a directory service. This permits applications to register their versions and port numbers so that client applications may query the portmapper to discover whether a service is active and which port number it is associated with. The client application can then connect to that port.
The tool “rpcinfo” is a standard tool available on practically all varieties of UNIX. The primary commands that the auditor will need to know include:
▪ "rpcinfo -p", which is used to discover local services, and
▪ "rpcinfo -p <target>", which allows the user to discover services on a remote host.
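Typical usage is shown below (the remote address is illustrative). Each line of output lists the RPC program number, version, protocol, port, and service name:

# Enumerate RPC services registered with the local portmapper
rpcinfo -p

# Enumerate RPC services registered on a remote host
rpcinfo -p 192.0.2.10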

Controlling Services at Boot Time

Before we get into how services are started, we will take a brief look at how their underlying stack may be configured. The reason for this is that individual services will be affected by the underlying configuration. The file "/etc/sysctl.conf" is common to the majority of UNIX systems, though its contents and configuration will vary across systems. The System Control (sysctl) configuration will in the majority of cases control the system settings that are of prime importance to the auditor. Not all of the following options will be found in this file, but they may be included in one format or another:
ip_forward This option lets the IP stack act as a router and forward packets. Multiple interfaces are not required for this functionality.
accept_source_route This setting configures the operating system to accept source routed packets.
tcp_max_syn_backlog This setting configures the maximum number of SYNs in the wait state.
rp_filter This setting provides basic IP spoofing protection for incoming packets.
accept_redirects This setting configures the network stack to accept redirect messages and allow them to alter routing tables.
tcp_syncookies This setting provides syn-cookie based protection against syn flood DOS attacks.
send_redirects This setting controls whether or not the system can generate redirect messages.
secure_redirects This setting is a secondary control used to configure redirect acceptance behavior (for example, accepting redirects only from known gateways).
The auditor should create a script to test these settings (a minimal sketch follows the list below). The benefits are twofold:
1 The settings may be initially tested against an agreed baseline standard
2 The settings may be tested over time such that a system may be compared to its baseline standard and also a change log.
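A minimal sketch of such a script is shown below. It assumes Linux sysctl key names; the keys, the expected values, and even the tool itself differ on other UNIX variants, so the baseline list must be tailored to the system being audited:

#!/bin/sh
# Compare selected kernel network settings against an agreed baseline
BASELINE="net.ipv4.ip_forward=0
net.ipv4.conf.all.accept_source_route=0
net.ipv4.conf.all.rp_filter=1
net.ipv4.conf.all.accept_redirects=0
net.ipv4.conf.all.send_redirects=0
net.ipv4.tcp_syncookies=1"

echo "$BASELINE" | while IFS='=' read key expected
do
    actual=$(sysctl -n "$key" 2>/dev/null)
    if [ "$actual" != "$expected" ]; then
        echo "DEVIATION: $key is '${actual:-unset}' (baseline: $expected)"
    fi
done

Run at each audit and appended to a dated log, the same script supports both the baseline comparison and the change history described above.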

inetd and xinetd

Network services on UNIX start in a variety of different ways. A common method used by many applications is the "Super Daemon". A daemon on a UNIX system is a process or service that is initiated and subsequently continues to run without further interaction. It may initiate further actions from time to time or may wait for a network connection before taking any other action. The SMTP mail daemon is an example of such a service: the mail forwarder will bind a socket (generally to TCP port 25) and wait for a connection from another mail server before it does anything.
The two super daemons are inetd and xinetd. inetd was the original version of the software and has no access control built into itself by default. Although both versions may be found on most UNIX systems, the added functionality and increased security of xinetd make it the better choice. The configuration of xinetd is not the same as that for inetd. Instead of a single configuration file (as is used by inetd), xinetd relies on a particular directory (usually "/etc/xinetd.d"). This directory generally contains an individual configuration file for each of the services that are available and set to run at boot on the system. The auditor needs to note that services may be running even without a valid configuration file and, in some instances, services may not be running where there is a valid configuration file. In some instances services may have a configuration file that is marked "disable = yes"!
The primary reason for choosing xinetd over inetd is that xinetd integrates TCP Wrappers functionality in order to allow access controls for the individual services. This means that access control through ACLs is offered by the "Super Daemon" without a requirement to call tcpd for each of the services as they are launched.
On the majority of systems (unless specially configured), inetd-based services will not have particularly strong authentication methods associated with them. Further, inetd-based services do not generally log individual accesses to syslog. TCP Wrappers adds the capability to screen access based on the client's IP address, creating a simple host-based firewall. It also has the capability to log both successful and failed attempts to access the service. These logs will contain the IP address of the system that has accessed or attempted to access the service. Configuring the access control lists (ACLs) used by TCP Wrappers does take some time and a fair amount of planning. The upside, however, is that once this configuration is in place the system will be far more secure. One of the key principles of defense in depth is not to rely on single points of failure. Your site may have firewalls at different points on the network, but the addition of access control lists on the system increases security further for very little cost.
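As an illustration, a restrictive TCP Wrappers policy needs only two short entries; the administrative LAN address below is an assumption to be replaced with your own:

# /etc/hosts.deny: deny by default any service not explicitly allowed
ALL: ALL

# /etc/hosts.allow: permit SSH only from the administrative LAN
sshd: 192.168.10.0/255.255.255.0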

Authentication and Validation

There are a variety of ways in which a user can authenticate in UNIX. The two primary cases are authentication to the operating system and authentication to an application alone. In the case of an application such as a window manager (for example, X-Window), authentication to the application is in effect authentication to the operating system itself. Additionally, authentication may be divided into local and networked authentication. In either case, the same applications may provide access to either the local or the remote system. For instance, X-Window may be used both as a local window manager and as a means of accessing a remote UNIX system. Similarly, network access tools such as SSH provide the capability of connecting to a remote host but may also connect to the local machine by connecting to either its advertised IP address or the localhost (127.0.0.1) address.
The UNIX authentication scheme is based on the /etc/passwd file. Pluggable Authentication Modules (PAM) have extended this functionality and allowed for the integration of many other authentication schemes. PAM was first proposed by Sun Microsystems in 1995 and was integrated into Red Hat Linux the following year. Subsequently, PAM has become the mainstay authentication scheme for Linux and many UNIX varieties. PAM has been standardized as a component of the X/Open UNIX standardization process.
This resulted in the X/Open Single Sign-on (XSSO) standard. From the auditor's perspective, PAM necessitates a recovery mechanism integrated into the operating system in case a difficulty develops in the linker or shared libraries. The auditor also needs to come to an understanding of the complete authentication and authorization methodology deployed on the system. PAM allows for single sign-on across multiple servers. Additionally, there are a large number of plug-ins to PAM that vary in their strength. It is important to assess the overall level of security provided by these and to remember that the system is only as secure as the weakest link.
The fallback authentication method for any UNIX system lies with the /etc/passwd (password) file (see Figure 17.2). In modern UNIX systems this will be coupled with a shadow file. The password file contains information about the user: the user ID (UID), the group ID (GID), a description field (generally the user's full name), the user's home directory, and the user's default shell.
Figure 17.2 The /etc/passwd File
The user ID and group ID give the system the information needed to match access requirements. The home directory in the password file is the default directory that a user will be sent to in the case of an interactive login. The shell directive sets the initial shell assigned to the user on login. In many cases a user will be able to change directories or initiate an alternative shell, but this at least sets the initial environment. It is important to remember that the password file is generally world-readable. In order to correlate user IDs to user names when looking at directory listings and process listings, the system requires that the password file be readable by all authenticated users.
The password field of the /etc/passwd file has a historical origin. Before the password and shadow files were split, hashes would be stored in this file. To maintain compatibility, the same format has been used. In modern systems where the password and shadow files are split, an "x" is used to indicate that the system has stored the password hashes in an alternative file. If this field is blank, the account has no password. It is crucial that the auditor validate the authentication method used.
The default shell may be a standard interactive shell, a custom script or application designed to limit the functionality of the user, or even a false shell designed to restrict use and stop interactive logins. False shells are generally used in the case of service accounts. This allows the account to log in (such as in the case of "lp" for print services) and complete the task it is assigned. Additionally, users may be configured to run an application. A custom script could be configured to start the application, allowing the user limited access to the system, and to then log the user off when they exit the application. It is important for the auditor to check that the user cannot set break points to escape the script and gain an interactive shell. Further, in the case of application access, it is also important to check that the application does not allow the user to spawn an interactive shell if this is not desired.
As was previously mentioned, the majority of modern UNIX systems deploy a shadow file. This file is associated with the password file, but unlike the password file should not be accessible (even to read) by the majority of users on the system. The format of this file is:
User : Password_Hash : Last_Changed : Password Policy
This allows the system to match the user and other information in the shadow file to the password file. The password entry is in actuality a password hash. The reason that this should be protected comes down to the reason the file first came into existence. In early versions of UNIX there was no shadow file. Since the password file was world-readable, a common attack was to copy the password file and use a dictionary to "crack" the password hashes. By splitting the password and shadow files, the password hash is not available to all users, and thus it is more difficult for a user to attack the system. The password hash function always creates the same number of characters (this length may vary from system to system based on the algorithm deployed, such as MD5 or DES).
UNIX systems are characteristically configured to allow zero days between password changes and 99,999 days between forced changes. In effect this means that the default password policies are ineffective. The fields that exist in the shadow file are detailed below:
▪ The username
▪ The password hash
▪ The number of days since 01 Jan 1970 that password was last changed
▪ The number of days that must pass before the password can be changed
▪ The number of days after which password must be changed
▪ The number of days before expiration that the user is warned
▪ The number of days after expiration that account is disabled
▪ The number of days since 01 Jan 1970 that account has been disabled
Because the hash function will always create a password hash of the same length, it is possible to restrict logins by changing the password hash variable in the shadow file. For instance, changing the password hash field to a string such as "No_login" will create an account that cannot log in: as this string is shorter than a password hash, no password hash could ever match it. In this instance we have created an account that is not formally disabled but will not allow interactive logins.
Many systems also support complex password policies. This information is generally stored in the "password policy" section of the shadow file. The password policy generally consists of the minimum password age, maximum password age, expiration warning timer, post-expiration disable timer, and a count of how many days an account has been disabled. Many system administrators do not know how to interpret the shadow file. As an auditor, knowledge of this information will be valuable. Not only will it allow you to validate password policy information, but it may also help in displaying a level of technical knowledge.
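A sample (hypothetical) shadow entry illustrates how these fields line up:

# user:hash:last-change:min:max:warn:inactive:expire:(reserved)
alice:$1$<salt>$<hash>:13322:7:90:14:30::

# The "$1$" prefix indicates an MD5-based hash; the date fields count days
# since 01 Jan 1970. A minimum age of 7 and maximum of 90 enforce a real
# password-aging policy, unlike the 0/99,999 defaults noted above.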
When auditing access rights, it is important to look at both how the user logs in and where they log in from. Always consider the question of whether users should be able to log in to the root account directly. Should they be able to do this across the network? Should they authenticate to the system first and then re-authenticate as root (using a tool such as “su” or “SUDO”)? When auditing the system, these are some of the questions that you need to consider.
Many UNIX systems control this type of access using the "/etc/securetty" file. This file includes an inventory of all of the "ttys" used by the system. When auditing the system, first collate a list of all locations that are considered secure enough to sanction direct root logins, then verify that only those terminals can be used to log in as root. Generally, this means that there is either a serial connection to a secure management server or, more likely, it means allowing root logins only from the console itself. It is also important to note that many services, such as SSH, have their own configuration files that allow or restrict authentication by the root user. It is important to check not only the "/etc/securetty" file but also any related configuration files associated with individual applications.
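Two quick checks cover the common cases (the sshd_config directive shown assumes OpenSSH is the SSH implementation in use):

# Terminals from which root may log in directly
cat /etc/securetty

# Whether the SSH daemon permits direct root logins (OpenSSH)
grep -i '^PermitRootLogin' /etc/ssh/sshd_config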
Side note: TTY stands for teletype. Back in the early days of UNIX, one of the standard ways of accessing a terminal was via the teletype service. Although this is one of the many technologies that have faded into obscurity, UNIX was first created in the 1960s and 70s. Many of the terms have come down from those long-distant days.

Logging

There are a wide variety of logging functions and services on UNIX. Some of these, such as the Solaris audit facility, are limited to a particular variety of UNIX. It is important that auditors become familiar with the logging deployed on the UNIX system they are auditing. In particular, have a look at the syslog configuration file and the "/var/log" and "/var/run" directories, and check whether there are any remote log servers. Syslog is a network service that is most commonly run locally; it provides the capability of sending logs to a remote system.
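For example, a couple of lines in the syslog configuration file are enough to duplicate authentication messages to a remote log host; the host name is illustrative, and facility names vary between syslog implementations:

# /etc/syslog.conf: send authentication messages to a central log server
auth,authpriv.*                        @loghost.example.com

# ... while continuing to log general messages locally
*.info;mail.none;authpriv.none         /var/log/messages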

Syslog and Other Standard Logs

There are five primary log files that will exist on nearly any UNIX system (the location may vary slightly). These have been listed in Table 17.2.
Table 17.2 The Five Primary UNIX Log Files
Log File | Description
/var/log/btmp | Contains the failed login history
/var/log/messages | The default location for messages from the syslog facility
/var/log/secure | The default log for access and authentication
/var/run/utmp | Contains a summary of currently logged-on users
/var/log/wtmp | Details the history of logins and logouts on the system
The bad logon attempt file ("/var/log/btmp") is a semi-permanent log (like wtmp) that tracks failed login attempts. This file is in binary format and is read using the "lastb" command. On many systems the btmp file will not be created by default; if this file does not exist, the system will not log to it. Any audit of a UNIX system should validate the existence of this file and ensure that it is functioning correctly. One way to validate that the file is working is to attempt to log in to the system using a set of invalid credentials: if the log is working correctly, an entry should be recorded noting the auditor's failed attempt. It is important that this file be restricted so that only root can access or change it. General users have no reason to see failed attempts and should never be able to change or delete this file.
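On many Linux systems the file can be created and restricted as follows, after which "lastb" will display the failed attempts (a sketch; file locations and required ownership vary):

# Enable failed-login recording and restrict the log to root
touch /var/log/btmp
chmod 600 /var/log/btmp

# Review the failed login history
lastb | head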
The messages log ("/var/log/messages"), at times also the default syslog (on some systems this file will be named "/var/log/syslog"), contains by default the bulk of the system messages. Depending on the configuration of the syslog configuration file (commonly "/etc/syslog.conf"), this may contain failed driver messages, debug information, and many other messages associated with the running of a UNIX system.
The "secure" log ("/var/log/secure") is designed to record the security and authentication events that occur on the system. By default, applications such as TCP Wrappers will log to this file. In addition, the PAM system and "login" facilities will write to this file on most UNIX systems.
The utmp file ("/var/run/utmp") contains a point-in-time view of the users logged on to the system. This file is used by a number of applications and utilities (such as the "finger" and "who" commands). The file is volatile in that it will not survive a system reboot; further, when a user logs out of the system, their entry is removed. This file does not contain historical data, but it does provide a snapshot of user information at a point in time. This information includes the username, terminal identifier, the time the user logged in to the system, and where they logged in from (which may be a local TTY or a remote network host). Many rootkits will change the functionality of this file in an attempt to hide themselves.
The wtmp file ("/var/log/wtmp") is a binary file similar to "utmp". This file is also utilized by applications such as "finger", "last", and "who" and contains much of the same information as "utmp". The primary difference is that it is more permanent in nature: this file provides a formal audit trail of user access and will also record system boots and other events. The file is commonly used when investigating an incident. The "last" command uses this file to display a list of accesses to the system, showing a historic list as well as any user still logged on to the system. Like many other UNIX logging facilities, it must be activated.
Most UNIX systems (and any that are configured correctly) will rotate logs periodically. This may be done through an automated facility such as "cron" or through some other application. It is important to verify and validate how the log files are being rotated, whether they are being stored in an offline facility, that they have been backed up, and lastly that they are maintained online for an adequate period of time. Regulatory standards such as PCI-DSS version 1.1 require that system logs are not only maintained but also accessible online for a minimum period of time (in this case, 90 days). The auditor should ensure that all log files meet the minimum requirements for storage. In addition, always consider long-term data retention needs and the capability to restore logs after an extended period of time. Such log recovery may require that hardware and software associated with the previous system be maintained for a number of years (in the case of financial systems, this could be a period of six years following the decommissioning of the system).
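Where the Linux logrotate utility is in use, a retention requirement such as the PCI-DSS 90-day online window can be expressed directly (a sketch assuming daily rotation):

# /etc/logrotate.d/secure: rotate daily, keep 90 compressed copies online
/var/log/secure {
    daily
    rotate 90
    compress
    missingok
    notifempty
}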

System Accounting and Process Accounting

Accounting reports created by the system accounting service present the UNIX administrator with the information to assess current resource assignments, set resource limits and quotas, and predict future resource requirements. This information is also valuable to the auditor and allows for the monitoring of system resourcing. It is often forgotten that audit is about system use as well as security.
When system accounting has been enabled on a UNIX system, the collection of statistical data will begin when the system starts, or at least from the moment that the accounting service is initiated. The standard data collected by system accounting will include the following categories:
▪ Connect session statistics
▪ Disk space utilization
▪ Printer use
▪ Process use
The accounting system process starts with the collection of statistical data from which summary reports can be created. These reports can assist in system performance analysis and offer the criteria necessary to establish an impartial customer charge back billing system or many other functions related to the monitoring of the system. A number of the individual categories of statistics collected have been listed in the sections that follow.

Connect Session Statistics

Connect-session statistics allow an organization to bill, track, or charge access based on the tangible connect time. Connect-session accounting data, associated with user login and logout, is collected by the init and login commands. When a user logs in to the UNIX system, the login program makes an entry in the "wtmp" file. This file will contain the following user information:
▪ Date of login/logout
▪ Time of login/logout
▪ Terminal port
▪ User name
This data can be utilized in the production of reports containing information valuable to both the auditor and system administrator. Some of the information that can be extracted includes:
▪ Connect time seconds used
▪ Date and starting time of connect session
▪ Device address of connect session
▪ Login name
▪ Number of prime connect time seconds used
▪ Number of nonprime connect time seconds used
▪ Number of seconds elapsed from 01 Jan 1970 to the connect-session start time
▪ Process usage
▪ User ID (UID) associated with the connect-session
It is also possible to gather statistics about individual processes using system accounting. Some areas that may be collected include:
▪ Elapsed time and processor time consumed by the process
▪ First eight characters of the name of the command
▪ I/O (Input/output) statistics
▪ Memory usage
▪ Number of characters transferred
▪ Number of disk blocks read or written by the process
▪ User and group numbers under which the process runs
Many UNIX systems maintain statistical information in a "pacct" or process accounting database or file. This database is commonly found in the "/var/adm/pacct" file but, like many UNIX log files, this will vary from system to system. The accounting file is used by many of the system and process accounting commands. When a process terminates, the kernel writes information specific to that process into the "pacct" file. This file consists of the following information:
▪ Command used to start the process
▪ Process execution time
▪ Process owner's user ID
When system accounting is installed and running on a UNIX system, commands to display, report, and summarize process information will be available. Commands such as “ckpacct” can be used by the administrator or auditor to ensure that the process accounting file (“pacct”) remains under a set size and thus is stopped from either growing too large or possibly impacting system performance in other ways.

Disk Space Utilization

System accounting provides the ability for the auditor to receive information concerning the disk utilization of users. As it is possible to restrict users to a specified disk usage limit, the auditor may need to validate usage through a disk quota system. This may be monitored and tested to ensure users are adhering to limits; without such checks, an unwary client could be charged fees that are properly associated with another account. Disk usage commands perform three basic functions:
▪ Collect disk usage by filesystem
▪ Gather disk statistics and maintain them in a format that may be used by other system accounting commands for further reporting
▪ Report disk usage by user
Note: it is necessary to be aware that users can avoid charges and quota restrictions for disk usage by changing the ownership of their files to that of another user. The “chown” command provides a simple method for users to change ownership of files. Coupled with the ability to set access permissions (such as through the use of the “chmod” command), a user could create a file owned by another party that they could still access.
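The evasion is trivial where the system permits ordinary users to give files away, as many System V derivatives historically did (most modern Linux systems restrict chown to root); the user names are illustrative:

# As user alice: keep access to a file while charging its disk usage to bob
touch results.dat
chmod 666 results.dat    # leave the file world-readable and world-writable
chown bob results.dat    # disk usage is now accounted against bob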

Printer Usage

Printer usage data is stored in the “qacct” file (this is commonly located in “/var/adm/qacct” on many systems though this varies). The “qacct” file is created using an ASCII format. The qdaemon writes ASCII data to the “qacct” file following the completion of a print job. This file records printer queue data from each print session and should at a minimum contain the following fields:
▪ User Name
▪ User number (UID)
▪ Number of pages printed

Automatic Accounting Commands

To accumulate accounting data, the UNIX system needs to have a number of command entries installed into the "crontab" file (for example, the "/var/spool/cron/crontabs/adm" file on many UNIX variants, though this will vary from system to system). The adm user is configured to own the accounting files and processes. These commands have been designed to be run using cron in batch mode, but it is still possible to execute them manually from a command line or script.
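The entries below follow the traditional Solaris layout under /usr/lib/acct; both the paths and the schedule are illustrative and will differ between systems:

# adm crontab: periodic accounting housekeeping
0 * * * *    /usr/lib/acct/ckpacct
30 2 * * *   /usr/lib/acct/runacct 2> /var/adm/acct/nite/fd2log
30 7 1 * *   /usr/lib/acct/monacct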
ckpacct Controls the size of the /var/adm/pacct file. When the /var/adm/pacct file grows larger than a specified number of blocks (default = 1000 blocks), it turns off accounting and moves the file to /var/adm/pacctX (where X is the number of the file). Then ckpacct creates a new /var/adm/pacct for statistic storage. When the amount of free space on the filesystem falls below a designated threshold (default = 500 blocks), ckpacct automatically turns off process accounting. Once the free space exceeds the threshold, ckpacct restarts process accounting.
dodisk Dodisk produces disk usage accounting records by using the diskusg, acctdusg, and acctdisk commands. By default, dodisk creates disk accounting records on the special files. These special filenames are usually maintained in “/etc/fstab” or “/etc/filesystems”.
monacct Uses the daily reports created by the commands above to produce monthly summary reports.
runacct Maintains the daily accounting procedures. This command works with the acctmerg command to produce the daily summary report files sorted by user name.
sa1 System accounting data is collected and maintained in binary format in the file /var/adm/sa/sa{dd}, where {dd} is the day of the month.
sa2 The sa2 command removes reports from the “…/sa/sa{dd}” file that have been there over a week. It is also used to write a daily summary report of system activity to the “…/sa/sa{dd}” file.

System Accounting Commands that can be Run Automatically or Manually

The following system accounting commands may be run either from the command line or in an automated startup script:
startup When added to the /etc/rc*.d directories, the startup command initiates startup procedures for the accounting system.
shutacct Records the time that accounting was turned off by calling the acctwtmp command to write a line to the wtmp file. It then calls the "turnacct off" command to turn off process accounting.

Note

A number of System V UNIX varieties require that the "/etc/rc" files be edited to enable the system accounting run configuration.

Manually Executed Commands

Manually executed commands are designed to be run from the command line. These commands provide various functions, as described in the following list:
ac Prints connect-time records.
acctcom Displays process accounting summaries (this command may generally be run by all users).
acctcon1 Displays connect-time summaries.
accton Turns process accounting on and off.
chargefee Charges the user a predetermined fee for units of work performed. The charges are added to the daily report by the acctmerg command.
fwtmp Converts files between binary and ASCII formats.
last Displays information about previous logins.
lastcomm Displays information about the last commands that were executed.
lastlogin Displays the time each user last logged in.
prctmp Displays session records.
prtacct Displays total accounting files.
sa Summarizes raw accounting information to help manage large volumes of accounting information.
sadc Reports on various local system actions, such as buffer usage, disk and tape I/O activity, TTY device activity counters, and file access counters.
time Prints real time, user time, and system time required to execute a command.
timex Reports in seconds the elapsed time, user time, and execution time.
sar Writes to standard output the contents of selected cumulative activity counters in the operating system. The sar command reports only on local system actions.

File System Access Control

UNIX file-level access controls are both simple and complex. Granting permissions to individual users or small groups is simple. Difficulties may arise in cases where a system has to provide access to a large number of users or groups; in this situation it is possible for the number of groups to grow exponentially. UNIX file permissions are defined for:
▪ Owner
▪ Group
▪ World
The owner relates to an individual user. Restrictions on the owner associate file access with an individual. Group access provides the ability to set access restrictions across user groups. UNIX provides a group file (usually “/etc/group”) that contains a list of group memberships. Alternative applications have been developed for larger systems due to the difficulties associated with maintaining large numbers of group associations in a flat file database. The world designation is in effect equivalent to the Windows notion of everybody. Figure 17.3 diagrams UNIX file permissions.
Figure 17.3 UNIX File Permissions
UNIX has three main permissions: read, write, and execute. In addition there are a number of special permissions that we will discuss in this section. The read permission provides the capability to read a file or list the contents of a directory. The write permission provides the capability to edit a file, or add or delete a directory entry. The execute permission provides the capability to execute or run an executable file.
UNIX also provides a special capability through the setting of the "sticky bit". The "sticky bit" protects the files within a public directory that users are required to write to (for example, the "/tmp" directory). This protection is provided by stopping users from deleting files that belong to other users within the public directory. In directories where the "sticky bit" has been set, only the owner of the file, the owner of the directory, or the root user has the permission to delete a file.
The UNIX file permissions are: “r, w, x, t, s, S”. The following example demonstrates the octal format for “r” or read, “w” or write, and “x” or execute.
1 --x execute
2 -w- write
3 -wx write and execute
4 r-- read
5 r-x read and execute
6 rw- read and write
7 rwx read, write and execute
The first character listed when using symbolic notations to display the file attributes (such as from the output of the “ls -l” command) indicates the file type:
- denotes a regular file
b denotes a block special file
c denotes a character special file
d denotes a directory
l denotes a symbolic link
p denotes a named pipe
s denotes a domain socket
The three additional permissions mentioned in the preceding section are indicated by changing one of the three “execute” attributes (the execute attribute for user, group or world). Table 17.3 details the various special setuid and setgid permissions. There is a difference depending on whether the special permission is set on an executable or non-executable file. Figure 17.4 diagrams additional UNIX file permissions.
Table 17.3 setuid and setgid Permissions
Permission             Class   Executable files   Nonexecutable files
Set User ID (setuid)   User    s                  S
Set Group ID (setgid)  Group   s                  S
Sticky bit             World   t                  T
B9781597492669000175/gr4.jpg is missing
Figure 17.4
Additional UNIX File Permissions
The following examples provide an insight into symbolic notation:
▪ -rwxr-xr-- This permission set is associated with a regular file whose user class or owner has full permissions to read, write and execute the file. The group has the permissions to read and execute the file, and the world or everyone on the system is allowed only to read the file.
▪ crw-r--r-- This symbolic notation is associated with a character special file whose user or owner class has both the read and write permissions. The other classes (group and world) have only the read permission.
▪ dr-x------ This symbolic notation is associated with a directory whose user or owner class has read and execute permissions. The group and world classes have no permissions.

User-Level Access

The UNIX file system commonly distinguishes three classifications of users:
▪ Root (also known as the super-user account)
▪ Users with some privilege level, and
▪ All other users
The previous section on access controls showed how UNIX privileges and access to files may be granted with access control lists (ACLs). The simplicity of the UNIX privilege system can make it extremely difficult to configure privileges in UNIX; conversely, it also makes them relatively simple to audit. The UNIX directory command “ls -al” supplies the means to list all files and their attributes. The biggest advantage for an auditor is the capability to use scripting to capture the same information without having to actually visit the host. A baseline audit process may be created using tailored scripts that the audit team can save to a CD or DVD with statically linked binaries. Each time there is a requirement for an audit, the same process can be run. The benefits of this method are twofold. First, subsequent audits require less effort. Next, results of the audit can be compared over time. The initial audit can serve as a baseline, and the results compared to future audits both to verify the integrity of the system and to monitor improvements. A further benefit of this method is that a comparison may be run from the tools on the system against the results derived from the tools on the disk.
Generally, it would be expected that no variation would result from the execution of either version of the tools. In the event that a Trojan or root kit found its way onto the server, the addition of a simple “diff” command would be invaluable. In the event that the diff command returned no output, it would be likely that no Trojan was on the system (excepting kernel and lower level software). If on the other hand there was a variation in the results, one would instantly know that something was wrong with the system.
The primary benefit of any audit control that may be scripted is that it also may be automated. The creation of such a script and an associated predetermined configuration file saves the auditor valuable time, allowing them to cover a wider range of systems and provide a more effective service. The selection of what to audit for on a file system will vary from site to site. There are a number of common configuration files associated with each version of UNIX and also a number of files and directories common to any organization. The integration of best practice tools such as those provided by the Centre for Internet Security, SANS, NIST and the US Department of Defense (see the Appendixes for further details) provides a suitable baseline for the creation of an individual system audit checklist.

Special Permissions That Are Set for a File or Directory on the Whole, Not by a Class

The set user ID, setuid, or SUID permission

When a file for which this permission has been set is executed, the resulting process will assume the effective user ID of the file's owner (the user class).

The set group ID, setgid, or SGID permission

When a file for which this permission has been set is executed, the resulting process will assume the effective group ID of the file's group (the group class). When setgid is applied to a directory, new files and directories created under that directory will inherit the group from that directory. The default behavior is to use the primary group of the effective user when setting the group of new files and directories.

The sticky permission

The characteristic behavior of the sticky bit on executable files historically instructed the kernel to preserve the resulting process image in memory beyond termination. When set on a directory, the sticky permission stops users from renaming, moving or deleting contained files owned by users other than themselves, even if they have write permission to the directory. Only the directory owner and superuser are exempt from this.

UNIX Commands for File Permissions

Chmod

The chmod command is used to modify permissions on a file or directory. The command supports both symbolic notation (e.g. “chmod o+x file”) and octal notation (as discussed above).

ls or the List Command

The ls command displays file attributes and directory contents. There are many options associated with this command that the auditor should become familiar with. Some of the main options include:
ls -a This option will display all files, including hidden files.
ls -l This option provides “verbose” or extended information.
ls -r This option displays information using a reverse sort order.
ls -t This option sorts the output using the timestamp.

“cat” or Concatenate

The “cat” command is similar to the “type” command on Microsoft Windows. This command is generally used to output or view the contents of a file. The command can also be used to join or concatenate multiple files together.

“man” the UNIX online Manual

The “man” command may be used to view information or help files concerning the majority of UNIX commands. It is possible to conduct keyword searches if you are unsure of a command for a particular type of UNIX. Keyword search in “man” is provided by “apropos”.

Usernames, UIDs, and the Superuser

Root is almost always associated with the global privilege level. In some extraordinary cases (such as specialized UNIX systems running Mandatory Access Controls) this is not true, but these are rare. The super-user or “root” account (designated universally as UID “0”) has the capacity to do practically anything on a UNIX system. RBAC (role-based access control) can be implemented to provide for the delegation of administrative tasks (and tools such as “sudo”, or super-user do, also provide this capability). RBAC provides the ability to create roles. Roles, if configured correctly, greatly limit the need to use the root user privilege. RBAC both limits the use of the “su” command and the number of users who have access to the root account. Tools such as sudo successfully provide similar types of control, but RBAC is more granular, allowing for a far greater number of roles on any individual server. It will come down to the individual situation within any organization as to which particular solution is best.

Blocking Accounts, Expiration, etc.

The password properties in the /etc/shadow file contain information about password expiration in a number of fields. The exact method for enabling or disabling account expiration will vary between different UNIX varieties, and the auditor needs to become familiar with the system that they are to audit. As was mentioned above, there are a number of ways to restrict access to accounts. It is necessary to create a checklist that takes into account the individual differences that occur across sites.
The “chage” command changes the number of days between password changes and the date of the last password change. Information contained within the shadow file is used by the system to determine when the user must change their password. This information is valuable to the auditor as well, and a list of current users with the times that they last accessed the system and changed their passwords is an essential part of any UNIX audit.
The command “chage -l vivek” (which again will vary slightly by system; “vivek” here is an example username) may be used to list the current ageing on an existing account on the system. An example of the output provided by this command is detailed below.
Last password change: Mar 20, 2008
Password expires: May 20, 2008
Password inactive: never
Account expires: never
Minimum number of days between password change: 3
Maximum number of days between password change: 90
Number of days of warning before password expires: 14

Restricting Superuser Access

Root access should be restricted to secured terminals. As was noted above, there are a variety of routes and applications by which root may access the system. The auditor needs to gain a level of familiarity with the system such that they can check all logical routes that may be used to authenticate and authorize the user. Some of the more common avenues used to authenticate to a UNIX system include:
▪ telnet (and other TTY based methods)
▪ SSH
▪ X-Window
▪ Local terminals and serial connections.

Disabling .rhosts

It is important to ensure that no user (not even, and in fact especially not, root) has a “.rhosts” file in their home directory. The “.rhosts” file is one of the biggest security risks and is in fact a greater risk than the “/etc/hosts.equiv” file, although they have the same functional purpose. The problem with “.rhosts” files is that they can be created by any user on the system. Some services, such as unattended backups run over a network, try to use these files; however, this practice should be avoided.
The auditor should check that there are cron processes implemented to periodically check for and report the contents of any of these files. Ideally the same process should delete the contents of any $HOME/.rhosts files it finds, replacing them with a blank or empty file owned by root that can only be accessed or written by root (that is, 400 permissions). Let your users know that you will regularly perform an audit of this type and include it in the standard processes.
Of particular concern are files of this type that have the symbol “-” as the first character in the file, or the symbol “+” on any line, as these may allow users access to the system. Ensure that blank files are created in each user's home directory with permissions set to either 400 or 600. It is further recommended that any site use logdaemon to restrict the use of $HOME/.rhosts.

Additional Security Configuration

There are a number of additional security steps that should be taken to lock down the system. UNIX has a variety of good access control tools that allow the creation of intrusion detection and firewalling capabilities at the host level. For the most part, these tools are either distributed with the operating system or available freely. The exact nature and precise availability of these tools will vary from system to system. The final section of this chapter concerning the development of a checklist will provide details on where to access information specific to a number of UNIX varieties which will greatly aid the auditor and the creation of a security configuration checklist.

Network Access Control

Whether it is running through inetd or the integrated version in xinetd, it is essential that TCPwrappers is installed and running on any UNIX system. By themselves, the majority of UNIX services have very limited (if indeed any) logging capabilities. TCPwrappers (tcpd) creates a process that encapsulates network services, both to add a layer of logging and to add a filtering capability. This allows it to selectively accept or reject connections based on a predefined set of ACLs. These ACLs may be set to block or allow selected hosts or entire domains, and even to check for IP address spoofing.
As stated above, TCPwrappers is built into xinetd but needs to be added to inetd. To provide the same level of functionality in inetd as is included in xinetd, it is necessary to modify the entries in “/etc/inetd.conf” so that TCPwrappers can function. A standard, unwrapped entry looks like the following:
telnet stream tcp nowait root /usr/sbin/in.telnetd
Where the application uses TCP wrappers, tcpd is initiated first, and it then calls the network service if the user is authorized to access it. The wrapped entry becomes:
telnet stream tcp nowait root /usr/sbin/tcpd in.telnetd
On modifying “/etc/inetd.conf” it is essential to restart inetd by sending it a kill -HUP signal.

Use tcpd to limit access to your machine

TCPwrappers works in conjunction with the “/etc/hosts.allow” and “/etc/hosts.deny” files in order to restrict access to specific network services. The configuration of these services is provided through separate allow or deny statements. Together, this allows for an extremely granular set of access control lists. As an example, the following “/etc/hosts.deny” configuration file denies access to everyone, creating in effect a default deny rule. In the succeeding “/etc/hosts.allow” configuration file, access for chosen trusted hosts is selectively allowed.
/etc/hosts.deny
# hosts.deny    This file describes the names of the hosts which are
# *not* allowed to use the local INET services, as decided
# by the ‘/usr/sbin/tcpd’ server.
ALL: ALL
/etc/hosts.allow
# hosts.allow   This file describes the names of the hosts which are
# allowed to use the local INET services, as decided
# by the ‘/usr/sbin/tcpd’ server.
# allow access to local machines
#
ALL: localhost, .farm.ridges-estate.com
# Other trusted systems - any host in the guest domain at Bagnoo
# except for sister.ridges-estate.com when my systems are visiting
#
ALL: .guest.ridges-estate.com EXCEPT sister.ridges-estate.com
# allow FTP access to anyone inside the Bagnoo Test Networks
#
in.ftpd: .test.ridges-estate.com
Several other UNIX network applications and services have the ability to restrict access. The Apache web server can use the “access.conf” to restrict access at the directory-level to hosts and domains. Further, SSH incorporates the ability to restrict access to selective hosts and address ranges in the server configuration file.

Use ssh instead of telnet, rlogin, rsh and rcp

Secure Shell (ssh) is a “program to log into another computer over a network, to execute commands in a remote machine, and to move files from one machine to another. It provides strong authentication and secure communications over insecure channels” (F-Secure).
Ssh should always be used to replace all the first-generation tools such as telnet, rexec, rlogin and rcp. This is even more critical in insecure environments such as when used across the Internet as it is possible that an attacker could be eavesdropping on the network with packet sniffers. Older style protocols such as Telnet send authentication information and subsequent communications in clear text. Not only is there a problem with packet sniffing, but an attacker could also hijack the session.
Ssh provides the capability to offer public key encrypted tunnels that provide protection against packet sniffing and hijacked connections. These tunnels may be used to encapsulate other protocols, allowing for the provision of secure X11 sessions and the capability to redirect TCP/IP ports. As such, other TCP/IP traffic may be encrypted through the introduction of an ssh-based tunnel. There are both open source and commercial ssh clients and server software for UNIX. In addition, Windows and Macintosh clients also exist in both the commercial and open source realms.

Network Profiling

It is essential to identify network services running on a UNIX host as a part of any audit. To do this, the auditor needs to understand the relationship between active network services, local services running on the host and be able to identify network behavior that occurs as a result of this interaction. There are a number of tools available for any UNIX system that the auditor needs to be familiar with.

Netstat

Netstat lists all active connections as well as the ports where processes are listening for connections. The command “netstat -p -a --inet” (or the equivalent on other UNIX'es) will print a listing of this information. Not all UNIX versions support the “-p” option for netstat; in this case other tools may be used.

Lsof

The command, “lsof” allows the auditor to list all open files where “An open file may be a regular file, a directory, a block special file, a character special file, an executing text reference, a library, or a stream or network file”.

Ps

The command, “ps” reports a snapshot of the current processes running on a UNIX host. Some examples from the “ps” man page of one UNIX system are listed below.
To see every process on the system using standard syntax:
ps -e
ps -ef
ps -eF
ps -ely
To see every process on the system using BSD syntax:
ps ax
ps axu
To print a process tree:
ps -ejH
ps axjf
To get info about threads:
ps -eLf
ps axms
To get security info:
ps -eo euser,ruser,suser,fuser,f,comm,label
ps axZ
ps -eM
To see every process running as root (real & effective ID) in user format:
ps -U root -u root u
To see every process with a user-defined format:
ps -eo pid,tid,class,rtprio,ni,pri,psr,pcpu,stat,wchan:14,comm
ps axo stat,euid,ruid,tty,tpgid,sess,pgrp,ppid,pid,pcpu,comm
ps -eo pid,tt,user,fname,tmout,f,wchan
Print only the process IDs of syslogd:
ps -C syslogd -o pid=
Print only the name of PID 42:
ps -p 42 -o comm=

Top

The command, “top” is distributed with many varieties of UNIX. It is also available from www.unixtop.org/. The top command provides continual reports about the state of the system, including a list of the top CPU using processes. This command gives much of the information found in the Microsoft Windows Task Manager. The main functions of the program as stated by the developers are to:
▪ provide an accurate snapshot of the system and process state,
▪ not be one of the top processes itself, and
▪ be as portable as possible.

Kernel Tuning for Security

The UNIX kernel has many configurable parameters that are security related. These parameters can be adjusted to strengthen the security posture of a system, covering aspects such as ARP timeouts, IP forwarding of packets, IP source routing of packets, TCP connection queue sizes, and many other factors controlling network connections. Correct tuning of the kernel will even significantly reduce OS fingerprinting of the system when an attacker is using tools such as queso and nmap.
Most modern UNIX systems have introduced the concept of the “/proc” file tree. This allows administrators to access the process space and kernel through the file system. The “/proc/<PID>/cwd” may be accessed if you know the identity of a process. Each of the directories in the “/proc” file tree is associated with the PID (process ID) of a running process. The command, “lsof” (list open files) may be used to identify hidden file space and report on which process is accessing any open file. There are times when it is possible to access a file through the proc file system after it has been deleted.
Each variety of UNIX will have its own kernel parameters. It is important that the auditor investigates these prior to the audit and creates a list of customized settings to check. As an example we will look at some of the settings in a Solaris UNIX system.

Solaris Kernel Tools

The tool provided within Solaris UNIX for tuning kernel parameters is the command “ndd”. This is far more limited with respect to kernel tuning as Solaris “ndd” only supports the TCP/IP kernel drivers. This tool is valuable to the auditor as it can be used not only to set the values of parameters for these drivers, but also to display the current configuration.

Solaris Kernel Parameters

The standard “ndd” command format is:
ndd /dev/<driver> <parameter>
In the command format, the parameter <driver> may be: arp, ip, tcp, or udp. To view all the parameters for a particular driver, the command is:
ndd /dev/<driver> ?
The command used to set a kernel parameter using ndd is (although this is not something that an auditor will generally use):
ndd -set /dev/<driver> <parameter> <value>
The primary difficulty with Solaris is that any changes made to kernel parameter values using ndd are not permanent and will return to their defaults upon system reboot. To be effective, these changes need to be put into a shell script that is run at system boot.

ARP

Address Resolution Protocol (ARP) is used to dynamically map layer-3 network addresses to data-link addresses. The ARP cache is vulnerable to ARP cache poisoning and ARP spoofing attacks. ARP cache poisoning involves the insertion of either a non-existent ARP address or an incorrect ARP address into a system's ARP cache. This results in a denial of service, since the target system will send packets to the peer's IP address but the MAC address will be wrong.
ARP spoofing can be used by an attacker in order to attempt to compromise the system. ARP spoofing relies on disabling a host on the network so that it cannot reply to any ARP request broadcasts, and then subsequently configuring the disabled host's IP address on the attacking host. When the host being attacked attempts to communicate with the disabled host, the attacker's system responds to any ARP request broadcasts, thus inserting its MAC address in the attacked host's ARP cache. Communication between the two hosts can then proceed as usual. It is very tricky to protect a system against ARP attacks. A possible defense is to reduce the lifetime of cache entries. The cache lifetime is determined in Solaris by the kernel parameter “arp_cleanup_interval”. The IP routing table entry lifetime is set by the kernel parameter “ip_ire_flush_interval”. These parameters are set as follows:
ndd -set /dev/arp arp_cleanup_interval <time>
ndd -set /dev/ip ip_ire_flush_interval <time>
In the ndd command, <time> is specified in milliseconds. Reducing the ARP cache timeout interval and the IP routing table timeout interval can make things more difficult for the attacker by slowing down the attack. Alternatively, static ARP addresses should be created for secure trusted systems. Static ARP cache entries are permanent and therefore do not expire. These entries can be deleted using the command “arp -d”. Security may be further enhanced by disabling dynamic ARP entirely where only static entries are necessary.

IP Parameters

The Solaris kernel also introduces the capability to modify various characteristics of the IP network protocol. This functionality is provided through the following parameters:
▪ ip_forwarding
▪ ip_strict_dst_multihoming
▪ ip_forward_directed_broadcasts
▪ ip_forward_src_routed
IP forwarding involves routing IP packets between two interfaces on the same system. Unless the system is acting as a router, IP forwarding should be disabled by setting the kernel parameter ip_forwarding to 0 as follows:
ndd -set /dev/ip ip_forwarding 0
Setting the parameter ip_strict_dst_multihoming to 1 causes the system to drop any packets that arrive on an interface different from the one their destination address belongs to, such as spoofed packets:
ndd -set /dev/ip ip_strict_dst_multihoming 1
Directed broadcasts are packets that are sent from one system on a foreign network to all systems on another network. Directed broadcasts are the basis for the “smurf” attack where forged ICMP packets are sent from a host to the broadcast address of a remote network. To disable the forwarding of directed broadcasts set ip_forward_directed_broadcasts to 0 as follows:
ndd -set /dev/ip ip_forward_directed_broadcasts 0
Source routing is a common attack technique used to bypass firewalls and other controls. Configure the kernel to silently drop source-routed packets by setting the Solaris kernel parameter ip_forward_src_routed to 0 as follows:
ndd -set /dev/ip ip_forward_src_routed 0

TCP Parameters

SYN flooding is a common denial of service attack used against many operating systems. The Solaris kernel provides both some protection against SYN flooding and the ability to determine whether a Solaris system is under a TCP SYN flood attack by monitoring the number of TCP connections in the SYN_RCVD state as follows:
netstat -an -f inet | grep SYN_RCVD | wc -l
This is where having a system baseline becomes invaluable as it is then possible to compare the values taken when the machine is running under normal circumstances against those when you believe you are being attacked. Solaris also provides the capability to determine if the system is undergoing a SYN attack using the following command:
netstat -s -P tcp
The output of this command will provide the tcpTimRetransDrop and tcpListenDrop parameters. An experienced system administrator should be able to recognize a SYN attack using these values. The value tcpTimRetransDrop displays the number of aborts since boot time due to abort time expirations. This value includes both the SYN requests as well as established TCP connections. The value tcpListenDrop displays the number of SYN requests that have been refused since the system was booted due to a TCP queue backlog. It is likely that the system is experiencing a SYN attack in the event that the tcpListenDrop value increases quickly along with the value of tcpTimRetransDrop.
It is possible to defend against this type of attack by shortening the value of the abort timer and lengthening the TCP connection queue. Both of these may be done through kernel parameters. To decrease the abort timer, the kernel parameter “tcp_ip_abort_cinterval” is used, where the value is supplied to the command in milliseconds. By default the abort timer interval is set at 180 seconds. In order to decrease the abort time to 30 seconds, the following command may be used:
ndd -set /dev/tcp tcp_ip_abort_cinterval 30000
The kernel parameter tcp_conn_req_max_q0 controls the queue size for TCP connections that have not been established. The default value for tcp_conn_req_max_q0 is set at 1024 queue connections and may be increased using following command:
ndd -set /dev/tcp tcp_conn_req_max_q0 4096
Another type of DoS attack involving the SYN flag is based on an attacker exhausting the TCP established connection queue. The established connection queue is controlled by the kernel parameter tcp_conn_req_max_q, which is set by default at 128. An example command to increase the established TCP connection queue would be:
ndd -set /dev/tcp tcp_conn_req_max_q <size>

Security for the cron System

Depending on the version of UNIX, the cron spool can live in a variety of directories such as “/var/spool/cron” or “/var/cron”. Crontab entries are run at periodic intervals by the cron daemon, “crond”. Schedules may be configured to run across a variety of periods based on:
▪ Month
▪ Week
▪ Day
▪ Hour
▪ Minute
There are a number of reasons why cron is of particular interest to an auditor. First, a number of tasks may be automated. The creation of a set of audit scripts allows the auditor to have validation scripts run that send information at preset times. These scripts can be configured to load into a database and validate any changes to the system. Any variation from the baseline or from the previous audit results creates an automated change alerting system and helps to maintain the integrity of the system. Systems administrators may also use such a system to monitor key attributes such as memory use and disk capacity.
Next, the security of both the crontab itself and the scripts it calls is of paramount concern. If either the crontab process or any of the scripts that it calls are compromised, the entire system is at risk. Many system administrators understand the need to protect the cron daemon but do not understand the need to protect the files that cron calls. When you think about it, however, the matter becomes clear. Cron generally runs scripts and applications as a privileged user; in many cases this is “adm” or even root. If, for instance, an attacker modified a script called from cron, they could run a process to escalate their privileges or even install a root kit. It is not uncommon to see installations where cron files call scripts that have permissions of “777” associated with them. This in effect allows any user on the system to change the script.
One of the tasks that an auditor of a UNIX system must do is ensure that all applications and scripts listed in a crontab file are restricted such that only the owner can write to or modify them.

Backups and Archives

It is inevitable that something will eventually go wrong. This statement holds equally whether you consider UNIX, Windows or some other operating system. Consequently there needs to be some means of ensuring that the data on the system, and even the system itself, may be recovered. One of the roles of the auditor is to ensure that processes are in place that will lead to this end. There are a number of ways to ensure that a UNIX system is adequately backed up and archived, including both commercial options and those that come with the system. We will discuss only those tools that come with UNIX for the time being.

tar, dump, and dd

The “tar”, “dump” and “dd” commands provide the auditor with a simple means of collating files to either get them to or from the UNIX system being tested.

tar

The tar command is short for tape archiving, the storing of entire file systems onto magnetic tape, which was the origin of the command. However, the command has become a tool to simply combine a few files into a single file allowing for straightforward storage and distribution of backups, archives and even applications.
The process used to combine multiple files (and remember, directories are also files in UNIX) into a single file is supplied by the command:
tar -cvf destination_file.tar input_file_1 input_file_2
The “c” parameter tells “tar” to create a new archive rather than just concatenating a number of files, while the “f” parameter specifies the archive file to write to. The “v” parameter places “tar” into verbose mode, which reports all files as they are added.
The command to extract the files from an archive created by tar, at the shell prompt, is:
tar -xvf file.tar

Compressing and uncompressing tar images

Many UNIX varieties use GNU tar which also allows the use of gzip (the GNU file compression program) in conjunction with tar to create compressed archives. The command to create a compressed tar archive is:
tar -cvzf destination_file.tar.gz input_file_1 input_file_2
The “z” parameter instructs tar to gzip the archive as it is formed.
To unzip a gzipped tar file, the command would be:
tar -xvzf file.tar.gz
Where a UNIX system does not support GNU tar, gzip may be installed to create a compressed tar file. The following command provides this capability:
tar -cvf - input_file_1 input_file_2 | /usr/bin/gzip > destination_file.tar.gz
Alternatively, the UNIX compress command may be used instead of gzip. To do this just replace the “gzip” command with the “compress” command and change the “.gz” extension to “.Z”. Though extensions do not make a difference to the UNIX system, this is the common designation for the compress command, which by default looks specifically for an uppercase Z. To extract a “tar” archive that was created and compressed through the use of “gzip”, use the following command:
/usr/bin/gunzip -c file.tar.gz | tar -xvf -
Likewise, you would extract a tar archive that was compressed using the UNIX compress command by replacing “gunzip” with the “uncompress” command.
UNIX does not generally care about extensions in the manner that other operating systems do. However, it is good form to use the right ones so as to not confuse people and run the wrong commands. The extensions “.tgz” and “.tar.gz” are equal to each other and each signifies a tar file zipped with gzip.

dump

The UNIX man pages on Solaris tell us that “The dump utility is best suited for use in shell scripts, whereas the elfdump(1) command is recommended for more human-readable output.” Note, however, that this Solaris dump command operates on object files; the filesystem backup utility on Solaris is ufsdump, while on BSD and many Linux systems the dump command itself is an effective means of backing up a file system. See the man pages for the version of UNIX being reviewed for details.

dd

The command “dd” is a widespread UNIX command with the primary purpose of providing low-level (actually bit level) copying and conversion of raw data. “dd” is an abbreviation for “data definition”. This command is used in digital forensics as it can create a “byte-exact” copy of a file, drive sector or even an entire drive (even the deleted files).
Some people have dubbed “dd” “destroy disk” or “delete data” due to its capability to also write an image back to a drive. This provides the capability to both recover a drive (by restoring an image to another disk) and to “wipe” a disk. The wipe process is accomplished by sending either random data (/dev/random) or zeros (/dev/zero) to the drive through “dd”. This is mainly a concern to an auditor in the case of forensic audits for investigations, or in ensuring that system administrators are correctly destroying drives that are destined to leave the organization. Many regulations and standards (such as HIPAA and PCI-DSS) require a process to ensure that data has been cleansed. Using “dd” can achieve this.

Tricks and Techniques

Try having the system administrator create an “emergency boot disk” with these commands. Further, the ability of “dd” to create images allows the auditor to conduct intense tests of a system without impacting production. By using a virtual machine (such as VMware), the auditor can take and test an image off-line. This is particularly useful in situations where DoS and “dangerous” tests may not be run. Additionally, it may be possible to test the system without the critical data if this is an issue.

Auditing to Create a Secure Configuration

When auditing an unknown system, there is always a question as to the integrity of the tools. If a rogue administrator or attacker has gotten to the host first, they could have installed a rootkit, Trojans or otherwise compromised the host. The end result is that the auditor cannot trust the tools local to the system. There are exceptions to this, for instance if the tools have been stored on read-only media or if there is a valid trusted source that can be verified, such as a trusted hash database that may be validated using a tool such as Tripwire. Even in this instance it would still be necessary to ensure the integrity of the Tripwire binary and database.
To solve this dilemma, there are a number of Linux binary distributions that are freely available. KNOPPIX provides one such solution (Knoppix may be found at www.knoppix.org). Additionally, there are a number of distributions based on Knoppix that have already been created. A few of these are listed below:

Local Area Security

L.A.S. is a research group focused on information security related subjects that has created L.A.S. Linux, a live-CD security toolkit. It is available from http://localareasecurity.com/

WarLinux

A Linux distribution designed for wardriving. It is available on disk and bootable CD. Its primary intended use is for auditors who seek to audit and evaluate a wireless network installation.

Auditor/BackTrack

The Auditor Security Collection has been renamed BackTrack. This is a Linux distribution distributed as a LiveDistro that results from the merger of the Slax-based WHAX and the Kanotix-based Auditor Security Collection. With no installation whatsoever, the analysis platform is started directly from the CD-ROM and is fully accessible within minutes. Independent of the hardware in use, the Auditor Security Collection offers a standardized working environment, so that the build-up of know-how and remote support is made easier.

Elive

Elive is a LiveCD based on Debian Linux that uses Enlightenment as its sole desktop. It contains the EFL libraries required to launch EFL-related applications, making it possible to use it for Enlightenment programming anywhere without the system needing to be installed. Elive includes a large part of the programs related to Enlightenment and built on its libraries.

Arudius

Arudius is a Linux live CD with tools for information assurance (such as penetration testing, vulnerability analysis, and audit). It is based on Slackware (Zenwalk) for i386 systems and targets the information security field. It is released under the GNU GPL and contains only open-source software.

Building Your Own Auditing Toolkit

There are many Linux distributions readily available for testing. This however should not stop you from creating your own version of a UNIX test disc. Whether you are on Solaris, HP-UX or any other variety of UNIX it is simple to create an audit CD that can go between systems. The added benefit of this method is that the audit tools do not need to be left on the production server. This in itself could be a security risk and the ability to unmount the CD and take it with you increases security.
The ability to create a customized CD for your individual system means that the auditors can have their tools available for any UNIX system that they need to work with. It may also be possible to create a universal audit CD. Using statically linked binaries, a single DVD or CD could be created with separate directories for every UNIX variety in use in the organization that you are auditing. For instance, the same CD could contain a directory called “/Solaris” which would act as the base directory for all Solaris tools. Similarly, base directories for Linux (/Linux), HP-UX (/HPUX10, /HPUX9) and any other variety of UNIX in use in your organization could be included on the same distribution allowing you to take one disk with you but leaving you ready at all times.
The added benefit of creating your own disk is that you can update the tools any time you wish and add new ones. On top of this, the audit scripts that you have been creating may all be kept together in one place. If you are using a KNOPPIX distribution it will not have your audit scripts. These tools then become your trusted source of software. As was noted above, a script could be created that runs your trusted tool and also the tool on the host to verify that the results are the same. If there are any differences, it is easy to see that the system may have been compromised. The added benefit of this distribution is that you can also use it for incident response and forensic work if required.
When creating your distribution you should include the following binaries, in statically linked format where possible:
▪ “chown”, “chgrp”, “chmod”
▪ “cp”, “cat” and “diff”
▪ “find”, “ls” and “ps”
▪ “dd”
▪ “df” and “du”
▪ “rm” and “mv”
▪ “netstat”, “lsof” and “top”
▪ Compression Applications including: “compress”, “uncompress”, “gzip”, “gunzip”, and “tar”
▪ Include “shared libraries” and “static system libraries”
▪ gdb, nm
▪ ps, ls, diff, su,
▪ passwd
▪ strace/ltrace
▪ MD5 or another hash tool (preferably a number of these)
▪ fdisk/cfdisk
▪ who, w, finger
▪ dig
▪ scripts
▪ gcc, ldd
▪ sh, csh
It is also advisable to include “lsof” and “gcc” as well as their related libraries.
Dynamically linked executables are commonly used due to space limits. As a large number of applications can use identical basic system libraries, these are rarely stored in the application itself. An attacker could still compromise these libraries. Treat all system libraries as suspect and compile all tools using “gcc” with the “-static” parameter. This will create a static binary, or standalone executable. The ldd command can be used to demonstrate the dependency discovery process:
$ /cdrom/bin/ldd calc
libc.so.6 => /lib/libc.so.6 (0x40020000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

About ldd

The command, ldd may be used to list dynamic dependencies of executable files or shared objects. The ldd command can also be used to examine shared libraries themselves, in order to follow a chain of shared library dependencies.
The pvs command may also be useful. This command displays the internal version information of dynamic objects within an ELF file. Commonly these files are dynamic executables and shared objects, and possibly relocatable objects. This version information can fall into one of the following two categories:
▪ Version definitions describe the interface made available by an ELF file. Each version definition is associated with a set of global symbols provided by the file.
▪ Version dependencies describe the binding requirements of dynamic objects on the version definition of any shared object dependencies. When a dynamic object is built with a shared object, the link-editor records information within the dynamic object indicating that the shared object is a dependency.
For example, the command pvs -d /usr/lib/libelf.so.1 can be used to display version definition of the ELF file libelf.so.1.

Using the Distribution

To use your custom distribution, the first step involves mounting the CD or DVD as a file system. The next stage involves starting a “clean” shell and then setting the application search paths and library load paths. If you do not do this and you forget (or do not use) the complete directory path when calling an application (for example calling “/bin/sh” as against typing “sh” to start a shell), you cannot place any reliance on the security and integrity of the binaries and libraries being called. An example of this process is listed below:
# mount -t iso9660 -o ro /dev/cdrom /mnt/cdrom
# /mnt/cdrom/bin/ksh
# PATH="/mnt/cdrom/bin:/mnt/cdrom/sbin:$PATH"
# LD_LIBRARY_PATH="/mnt/cdrom/lib:$LD_LIBRARY_PATH"
# export PATH
# export LD_LIBRARY_PATH
When mounting the CD or DVD, also ensure that you have checked the device and not just assumed that it is set up correctly. It is possible that a rootkit could intercept mount function calls. Although an attacker could still bypass these controls, doing so is much more difficult.

File Integrity Assessment

Ensuring the integrity of a file system, individual file or other data is essential in ensuring the reliability and correctness of the system. Any system needs to be able to process data in a predictable and expected manner, such that it can ensure the correctness of data while securely processing input and retrieving data for output. Data integrity can only be maintained through a process of ensuring that the complete UNIX environment is adequately protected. This includes system hardware, software, applications and services, and the data from both an input and output stream perspective.

Hardware Integrity

Ensuring that the hardware integrity is maintained at an acceptable level requires that the physical environment is adequately secured. This involves controlling access to hardware resources. Theft, damage to resources and unauthorized access to data are all likely occurrences if the hardware has not been protected from physical intrusions.
Local physical access to any UNIX server will enable an attacker to gain control of the system with little effort. Consequently, an attacker could gain access to the super-user account and potentially install a backdoor or rootkit onto the system. Attackers can bypass physical security in a number of ways including:
▪ booting a server in single-user mode
▪ accessing memory via FireWire
▪ imaging a disk drive
▪ capturing information over shared media or by using a network sniffer
Additionally, it is good practice to limit access to selected physical locations. If an attacker is able to gain access to these locations, escalation to superuser becomes far easier.

Operating System Integrity

Maintaining the integrity of the UNIX operating system requires frequent patching, inspection of installed programs and security measures such as limiting access to network ports. To ensure that the operating system has not been compromised, it is essential that an integrity program such as AIDE or Tripwire is regularly run over the executables stored on the system. It is important to ensure that the integrity database is maintained offline. If an attacker is able to access these files, it is possible that they could change them, rendering the integrity method useless.
Integrity check tools work by maintaining a hash database of the various files. Another way of doing this would be to automate checks using a read-only source that can be mounted periodically and set to send out alerts if any files have changed without authorisation. One of the key controls necessary in maintaining system integrity is the use of secured and trusted time sources.

Data Integrity

There are a number of standards that help in providing the concepts necessary for evaluating a site's data integrity requirements. A number of these will be provided in the section at the end of this chapter covering the creation of a checklist for UNIX. It is not possible to cover the required level of integrity in detail outside an individual organization, as requirements will vary not only from site to site but from server to server and over time. The creation and maintenance of an effective process is necessary to ensure the continued maintenance of a site's data integrity.
In making a checklist investigate the various techniques that may be used to help ensure security of the data residing on a UNIX system.

Finer Points of Find

The UNIX “find” command is probably one of the auditor's best friends on any UNIX system. This command allows the auditor to process a set of files and/or directories in a file subtree. In particular, the command has the capability to search based on the following parameters:
▪ where to search (which pathname and the subtree)
▪ what category of file to search for (use “-type” to select directories, data files, links)
▪ how to process the files (use “-exec” to run a process against a selected file)
▪ the name of the file(s) (the “-name” parameter)
▪ perform logical operations on selections (the “-o” and “-a” parameters)
One of the key problems associated with the “find” command is that it can be difficult to use. Many experienced professionals with years of hands-on experience on UNIX systems still find this command to be tricky. Adding to this confusion are the differences between UNIX operating systems. The find command provides a complex subtree traversal capability. This includes the ability to exclude directory tree branches from traversal and also to select files and directories with regular expressions. As such, the specific types of file system searched with this command may be selected.
The find utility is designed for the purpose of searching files using directory information. This is in effect also the purpose of the “ls” command, but find goes far further. This is where the difficulty comes into play. Find is not a typical UNIX command with a large number of parameters, but is rather a miniature language in its own right.
The first option in find consists of setting the starting point or subtrees under which the find process will search. Unlike many commands, find allows multiple points to be set and reads each initial option before the first “-” character. That is, one command may be used to search multiple directories in a single search. The paper “Advanced techniques for using the UNIX find command” by B. Zimmerly provides an ideal introduction to the more advanced features of this command, and it is highly recommended that any auditor become familiar with it. This section of the chapter is based on much of his work.
The complete language of find is extremely detailed, consisting of numerous separate predicates and options. GNU find is a superset of the POSIX version and contains an even more detailed language structure. This extra complexity will generally only be used within scripts, as it is highly unlikely that this level of complexity would be used effectively interactively:
▪ -name True if the pattern matches the current file name. Simple regex (shell regex) may be used. A backslash (\) is used as an escape character within the pattern. The pattern should be escaped or quoted. If you need to include parts of the path in the pattern in GNU find, you should use the predicate “-wholename”.
▪ “-(a,c,m)time” It is possible to search on a file's last “access time”, “file status change time” or “modification time”, measured in days or minutes. This is done using the time interval in the parameters -atime, -ctime and -mtime. These values are either positive or negative integers.
▪ -fstype type True if the filesystem to which the file belongs is of type type. For example on Solaris mounted local filesystems have type ufs (Solaris 10 added zfs). For AIX local filesystem is jfs or jfs2 (journalled file system). If you want to traverse NFS filesystems you can use nfs (network file system). If you want to avoid traversing network and special filesystems you should use predicate local and in certain circumstances mount.
▪ “-local” This option is true where the file system type is not a remote file system type.
▪ “-mount” This option restricts the search to the file system containing the directory specified. The option does not list mount points to other file systems.
▪ “-newer/-anewer/-cnewer baseline” The time of modification, access time or creation time are compared with the same timestamp in the file used as a baseline.
▪ “-perm permissions” Locates files with certain permission settings. This is an important command to use when searching for world-writable files or SUID files.
▪ “-regex regex” The GNU version of find allows for file name matches using regular expressions. This is a match on the whole pathname, not a filename. The “-iregex” option provides the means to ignore case.
▪ “-user” This option locates files that have specified ownership. The option “-nouser” locates files without ownership. In the case where there is no user in “/etc/passwd” this search option will find matches to a file's numeric user ID (UID). Files are often created in this way when extracted from a tar archive.
▪ “-group” This option locates files that are owned by specified group. The option, “-nogroup” is used to refer to searches where the desired result relates to no group that matches the file's numeric group ID (GID) of the file.
▪ “-xattr” This is a logical function that returns true if the file has extended attributes.
▪ “-xdev” Same as the “-mount” parameter. This option prevents the find command from traversing a file system different from the one specified by the Path parameter.
▪ “-size” This parameter is used to search for files with a specified size. The “-size” attribute allows the creation of a search that can specify how large (or small) the files should be to match. You can specify your size in kilobytes and optionally also use + or - to specify size greater than or less than specified argument. For instance:
▪ find /usr/home -name “*.txt” -size 4096k
▪ find /export/home -name “*.html” -size +100k
▪ find /usr/home -name “*.gif” -size -100k
▪ “-ls” Lists the current file in “ls -dils” format on standard output.
▪ “-type” Locates a certain type of file. The most typical options for -type are:
d A Directory
f A File
l A Link

Logical Operations

Searches using “find” may be created using multiple logical conditions connected using the logical operations (such as “AND”, “OR”). By default, predicates are joined using a logical AND. In order to have multiple search options connected using a logical “OR”, the expression is generally contained in escaped parentheses to ensure the proper order of evaluation.
For instance \( -perm -2000 -o -perm -4000 \)
The symbol “!” is used to negate a condition (it means logical NOT). “NOT” should be specified with a backslash before the exclamation point ( \! ).
For instance find \! -name “*.tgz” -exec gzip {} \;
The “\( expression \)” format is used in cases where there is a complex condition.
For instance find / -type f \( -perm -2000 -o -perm -4000 \) -exec /mnt/cdrom/bin/ls -al {} \;

Output Options

The find command can also perform a number of actions on the files or directories that are returned. Some possibilities are detailed below:
▪ “-print” The “print” option displays the names of the files on standard output. The output can also be piped to a script for post-processing. This is the default action.
▪ “-exec” The “exec” option executes the specified command. This option is most appropriate for executing moderately simple commands.
Find can execute one or more commands for each file it has returned using the “-exec” parameter. Unfortunately, one cannot simply enter the command: the “{}” placeholder must be used to stand for the current file, and the command must be terminated with an escaped or quoted semicolon, as the examples below show.
For instance:
▪ find . -type d -exec ls -lad {} \;
▪ find . -type f -exec chmod 750 {} ';'
▪ find . -name "*rc.conf" -exec chmod o+r '{}' \;
▪ find . -name core -ctime +7 -exec /bin/rm -f {} \;
▪ find /tmp -exec grep "search_string" '{}' /dev/null \; -print
An alternative to the “-exec” parameter is to pipe the output into the “xargs” command. This section has only just touched on find and it is recommended that the auditor investigate this command further.

A Summary of the Find Command

Effective use of the find command can make any audit engagement much simpler. Some key points to consider when searching for files are detailed below:
▪ Consider where to search and what subtrees will be used in the command, remembering that multiple paths may be selected.
▪ find /tmp /usr /bin /sbin /opt -name sar
▪ The find command allows for the ability to match a variety of criteria.
▪ -name search using the name of the file(s). This can be a simple regex
▪ -type what type of file to search for ( d -- directories, f -- files, l -- links).
▪ -fstype type allows for the capability to search a specific filesystem type.
▪ -mtime x File was modified “x” days ago.
▪ -atime x File was accessed “x” days ago.
▪ -ctime x File was created “x” days ago.
▪ -size x File is “x” 512-byte blocks big.
▪ -user user The file's owner is “user”.
▪ -group group The file's group owner is “group”.
▪ -perm p The file's access mode is “p” (as either an integer or symbolic expression).
▪ Think about what you will actually use the command for, and consider the options available to either display the output or send it to other commands for further processing.
▪ -print display pathname (default).
▪ -exec allows for the capability to process listed files ( {} expands to current found file ).
▪ Combine matching criteria (predicates) into complex expressions using the logical operations -o and -a (the default binding) of the predicates specified.

Auditing to Maintain a Secure Configuration

Some of the other areas that we want to check when auditing a UNIX system are detailed in this section. The primary goal should be to take what was learned in the preceding sections and create checklists and scripts to help us initiate this process. The main reason for incorporating scripts is that we can quickly rerun any test that we have done in the past and we should expect the same (or at least similar) results each time.
Some of the key objectives in auditing a system should be to identify and maintain a list of the following information that can be tracked and monitored over time:
▪ Identify system type including hardware information and applications.
▪ Identify patch levels and validate that these are maintained.
▪ General system information should also be collected.

Operating system version

The command “uname -a” provides processor and operating system information related to the host being audited. Commands such as “patchdiag” are useful in analyzing the current versions of applications and software patching.

File systems in use

The “mount” command displays a list of all currently mounted file systems as well as a list of the types of file systems that are mounted.
The “fdisk -l” command is used to validate the mounted partitions against the actual partitions in use on the system.
The “free” command provides information on how much physical memory is installed in the system, how large the swap partition is, and how much space is currently in use and how much swap space is in use.
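A minimal sketch of gathering this information for the audit record (assuming a Linux host, since “fdisk -l” and “free” take this form on Linux):

mount        # currently mounted file systems and their types
fdisk -l     # partition tables (requires root); compare against the mount output
df -h        # space used and available per mounted file system
free         # physical memory and swap installed and in use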

Reading Logfiles

As was noted above, UNIX log files can be stored in a variety of different locations and formats. Where possible the aim should be to aggregate and store logs on a remote centralized computer (often called a log server). This host could then be a central location for monitoring many computers on the network. The system could then be firewalled to restrict access and not allow administration from remote sites.
An introduction to logging is freely available in a paper from NIST at http://csrc.nist.gov/nissc/1998/proceedings/paperD1.pdf
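As a minimal sketch, assuming the traditional syslogd, a single line on each client forwards all messages to the central log server (“loghost” is a placeholder for the real server name):

# Append to /etc/syslog.conf: send every facility.priority to loghost
echo '*.*   @loghost' >> /etc/syslog.conf

# Signal syslogd to re-read its configuration
kill -HUP $(cat /var/run/syslogd.pid)

The log server itself must accept remote messages (for example, the -r flag on Linux syslogd) and should be firewalled so that only the monitored hosts can reach UDP port 514.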

What Tools to Use

There are numerous UNIX security tools available and keeping up with changes can be difficult. One of the best sources of information on these tools is available from the “Top 100 Network Security Tools” site maintained by “Fyodor” at http://sectools.org/
NIST also maintains a list of security tools at http://csrc.nist.gov/tools/tools.htm

Password Assessment Tools

Assessing the strength of passwords is an essential task in any UNIX audit. There are a number of tools available to do this including those based on rainbow tables and also dictionary-based versions. The definitive “top 10” list for password crackers is again maintained by “Fyodor”. His site, “Top 10 Password Crackers” (http://sectools.org/crackers.html) maintains a list of password cracker tools and their availability.
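As a minimal sketch of a dictionary-based assessment using John the Ripper (one of the tools on that list; the wordlist path below is a placeholder):

# Combine account and hash information into a form John can read (run as root)
unshadow /etc/passwd /etc/shadow > passwd.audit

# Run a dictionary attack using a chosen wordlist
john --wordlist=/path/to/wordlist.txt passwd.audit

# Display any passwords recovered so far
john --show passwd.audit

Cracked passwords are sensitive audit evidence; secure the working files during the engagement and remove them afterwards.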

Creating your Check List

The most important tool that you can have is an up-to-date checklist for your system. This checklist will help define your scope and the processes that you intend to check and validate. The first step in this process involves identifying a good source of information that can be aligned to your organization's needs. The integration of security checklists and organizational policies with a process of internal accreditation will lead to good security practices and hence effective corporate governance.
The first stage is to identify the objectives associated with the systems that you seek to audit. Once this is done, the list of regulations and standards that the organization needs to adhere to may be collated. The secret is not to audit against each standard, but rather to create a series of controls that ensure you have a secure system. By creating a secure system you can virtually guarantee that you will comply with any regulatory framework.
The following sites offer a number of free checklists that are indispensable in the creation of your UNIX audit framework.

CIS (The Center for Internet Security)

CIS provides a large number of Benchmarks for not only UNIX but many other systems (and is consistently mentioned throughout this book). CIS offers both Benchmarks and also a number of tools that may be used to validate a system. The site is www.cisecurity.org
The site has a number of benchmarks and standards covering not only generic UNIX systems but also specific variants (such as HP-UX, Solaris, and Red Hat Linux).

SANS

The SANS Institute has a wealth of information available that will aid in the creation of a checklist as well as many documents that detail how to run the various tools.
The SANS reading room (www.sans.org/reading_room/) has a number of papers that have been made freely available:
▪ GSNA Audit Gold Papers
▪ GCUX UNIX Gold Papers
SANS SCORE (Security Consensus Operational Readiness Evaluation) is directly associated with CIS.

NSA, NIST and DISA

The US Government (through the NSA, DISA and NIST) has a large number of security configuration guidance papers and Benchmarks.
As shown in Figure 17.5, NIST runs the US “National Vulnerability Database” (see http://nvd.nist.gov/chklst_detail.cfm?config_id=58), which is associated with the UNIX Security Checklist from DISA (http://iase.disa.mil/stigs/checklist).
B9781597492669000175/gr5.jpg is missing
Figure 17.5
The National Vulnerability Database

Considerations in UNIX Auditing

The following list is a quick introduction to some of the things you should consider when creating a checklist for your UNIX system. It is by no means comprehensive, but it may be used as a quick framework in association with the standards listed above.

Physical Security

1 Console security
1 Is the system located in a locked room (with a limited number of keys and monitoring)?
2 Is the room secured so that there is no alternate way in (raised floors/ceilings)?
2 Data Security
1 Are backups stored in a safe place, and is an offsite data recovery scheme in place?
2 Are all systems protected using a UPS to guarantee stable power?
3 Are network cables secure from exposure?
4 Are cabinets with sensitive information locked?
3 Users practice secure measures
1 Lock screen (or logout) when away from desk.
2 No written passwords.

Network Security

1 Filtering
1 Do not enable services you are not using (inetd/xinetd).
2 Create access control lists (ACLs) to restrict who can connect.
3 Filter out unnecessary services and only allow services you want.
4 Use TCP wrappers for access control and logging (see the sketch after this list).
2 Prevent spoofing
1 Routing
1 Turn off source routing
2 Apply a filter that guarantees that packets coming in from the outside network do not have a source IP address that matches the inside network.
2 Use fully qualified hostnames in any system file (NFS exports, hosts.equiv, and so on).
3 No hosts.equiv or .rhosts files if possible (use a cron job to remove any that have not been agreed upon).
4 .rhosts and .netrc files (if allowed) must have permissions of 600.
3 Telnet Security
1 Use SSH and get rid of Telnet.
2 Limit telnet to specific IPs or use IPSec if it must be used.
3 Disable direct root logins (except perhaps at the console).
4 NESSUS will uncover many flaws
5 An IDS such as SNORT can monitor for attacks
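As a minimal sketch of the TCP wrappers configuration mentioned above (the address below is a placeholder for the local management network):

# /etc/hosts.deny -- refuse anything not explicitly allowed
ALL: ALL

# /etc/hosts.allow -- permit SSH from the management network only
sshd: 192.168.1.

Connections refused by the wrappers are logged through syslog, providing the audit trail referred to in the checklist.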

Account Security

1 Password Security
1 All accounts must have the password field filled; no account may have an empty password (see the sketch after this list).
2 Only root should have UID 0.
3 Password not guessable (crack on regular basis).
4 Use password aging.
5 Consider using one-time use passwords.
6 No “.rhosts” or “.netrc” files.
7 Accounts should be disabled after several consecutive failed logins.
2 Root Accounts
1 Only allow the super-user to log in directly at the console (/etc/securetty).
2 Check root dot files; never have “.” in the path.
3 Limit the number of users on a system.
4 Use a strong passwd.
5 Always logout of root shells; never leave root shells unattended (also logging out of normal shells is recommended).
6 Change the root passwd every 60 days and whenever someone leaves the company.
7 Log in as a regular user and use “su” (or use SUDO).
8 Strong umasks (077 if possible).
9 Always use full path in commands.
10 Never permit non-root write access to any directories in root's path.
11 Avoid creating temporary files in publicly writable directories.
3 Guest Accounts
1 Limited time, include account expiration.
2 Use non-standard account names (not guest).
3 Use strong passwd.
4 Use a restricted shell.
5 Strong umasks (077).
4 User Accounts
1 Remove accounts on termination.
2 Accounts should NEVER be shared.
3 Disable login for well-known accounts (bin, sys, uucp).
4 Strong umasks (077 is best).
5 Use a restricted shell if possible.
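A minimal sketch of two of the account checks above, assuming a shadow-password system (both files require root to read in full):

# Any account other than root with UID 0 is suspect
awk -F: '($3 == 0) {print $1}' /etc/passwd

# Accounts with an empty password field can log in without a password
awk -F: '($2 == "") {print $1}' /etc/shadow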

File System Security

1 Device Security
1 Device files /dev/null, /dev/tty, and /dev/console should be world-writable but never executable.
2 Most other device files should be un-readable and un-writeable by standard users.
2 Script Security
1 Never write setuid/setgid shell scripts; write “C” programs instead.
2 Scripts should always use full pathnames.
3 Minimize writable filesystems.
4 Use setuid/setgid files only where absolutely necessary (the sketch after this list shows how to enumerate them).
5 Ensure that important files are only accessible by authorized personnel.
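As a minimal sketch of enumerating setuid and setgid files for review against an approved baseline:

# All setuid files on local file systems
find / -xdev -type f -perm -4000 -print 2>/dev/null

# All setgid files
find / -xdev -type f -perm -2000 -print 2>/dev/null

Saving each run's output and comparing it with the previous run quickly reveals newly introduced setuid binaries.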

Security Testing

1 All the latest OS security patches installed.
2 Test using NESSUS (network security).
3 Test using TIGER (for methods that root may be compromised).
4 Test using CRACK (a password strength checker).
5 Integrity checks (AIDE or Tripwire to detect changes to files; see the sketch below).
6 Frequently check btmp, wtmp, syslog, sulog, and so on.
7 Set up automatic email or other alerting to notify system administrators of any suspicious activities.
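A minimal sketch of the integrity-checking step using AIDE (the database paths below follow common defaults but vary by distribution):

# Build the baseline database on a known-good system
aide --init
mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db

# Later, compare the current file system against the baseline
aide --check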

Notes

1 From the original Tiger README file.