Chapter 12. Forensics

 

“Only strong characters can resist the temptation of superficial analysis.”

--Albert Einstein

Sometimes logging isn’t enough; it can fail, or it can be incomplete, or it can be compromised. Sometimes it is simply too late by the time someone reads the log. And other times bad things just happen. That is where forensics comes in, giving users the capability to take snapshots of the forest before the tree falls, as well as allowing them to search the underbrush for fallen trees.

In this chapter we give you an overview of forensics and show how some open source tools can be used to monitor filesystem integrity and the options available for analyzing hard disk data in a postmortem situation.

An Overview of Computer Forensics

Computer forensics is a process that includes isolating, acquiring, preserving, documenting, and analyzing computer data for use as evidence. However, the analysis of a system can be conducted for various reasons with different goals in mind; the following are some examples:

  • Investigation of a computer-related crime, often to collect needed legal evidence.

  • Damage assessment after a break-in or exploit.

  • Analysis of the specifics of system weakness to prevent future break-ins.

  • Data recovery or evidence gathering from a corporate asset used by a naughty employee.

As computers continue to become more entangled in our daily activities, and more of our information is stored in bits than on paper, it should be no surprise that crimes involving computers will also increase over time. Computers are used as vehicles for crime and are often themselves the targets of crime.

Computer forensics encompasses more than just the actual process of analyzing any acquired data. There is a great deal of emphasis on, and concern over, the issues involved with each step of the process, from the initial acquisition to the final documentation and findings report. Although the circumstances surrounding an analysis will vary, with potentially different goals in mind, there are some core aspects common to any forensic analysis of computer data; digital evidence is first acquired, and then analyzed.

Acquisition

Computer forensics begins with discovery, realization, and a seizure of some kind. However, the initial set of actions taken in a forensic analysis case will vary depending on the persons involved, any defined incident response procedures, and usually on any learned facts leading up to the discovery. The decisions made at the onset of any incident can have a dramatic effect on the final results of a forensics examination.

First, prepare to document as much as possible. Maintain a log that includes details about where the data is stored, for what periods of time, and who has access to it. Such a log is known as part of the chain of custody. If the results of the analysis find their way into a legal setting, complete documentation in this manner will go a long way toward providing solid and convincing evidence for a case. Take note of everything about the environment, as trivial as it may seem. At the time, it may seem silly to document certain elements of the environment (such as the condition of the exterior case) but the goal is not to write flowing prose, it is to take multiple snapshots of the evidence or crime scene; part of those snapshots includes detailed descriptions of the environment. Who is to say that any one piece of information is more important than another before any real analysis is performed? Even the most experienced of examiners cannot possibly know what relevance the analysis may suggest concerning collected information. Keep an open mind at all times.

The main goal of the acquisition process is to capture and preserve the entire state of the system and to avoid any loss or tampering of data. This includes all contents of the hard disk(s) as well as the resident memory, state of the running kernel, network, application processes, and various aspects of the operating system. However, this is easier said than done if the system in question is up and running and providing a critical service.

Whether or not to shut down a system, and how, is often a topic of debate. Should the system be powered down? Should it be removed from the network? Obviously, if the system in question is a threat to other machines on the network, it should be isolated. However, sometimes the extent of any threat is not known. In the event that the system needs to be shut down, halt the operating system in the normal fashion but do not allow it to reboot; there may be self-destruct features in place designed to cover an intruder’s tracks. The immediate loss of power stops the operating system in its tracks; however, it also can cause damage to the hardware, result in the loss of useful data, and possibly destroy any evidence residing in the running state of the system. Evidence exists in many forms and the wrong decision here can potentially lead to the loss of critical evidence. Essentially, there is no right answer to this problem; it boils down to one’s best judgment based on the facts at hand, and the circumstances of the situation. In any case, document everything; you can always discard irrelevant information later.

The act of acquiring data usually involves a byte-by-byte duplication of any hard disks. There are commercial products today that provide write protection for the disk and allow an examiner to extract and archive the data to a more permanent storage medium. Subsequent copies should be made from the first copy so that the original data is handled as little as possible. The original drive as well as any archive(s) should be labeled and then placed in a secure location. The original data as well as the official extraction archive should never be given out for analysis. Analysis should only be done on copies of the data.

Analysis

In the world of computer forensics, the analysis and recovery phase is often the most exciting. There are various methods for analyzing a hard disk image, and any analyses conducted may differ depending on the goal. The goal may be to recover deleted files, to gather evidence against a person, or to find out how a compromised host was attacked and what can be done to prevent it in the future. In any case, there are some tools and procedures that are common to most kinds of forensic analysis.

Never perform analysis on the original data. As mentioned earlier, the original data should be copied once, and then stored in a secure location. All subsequent analysis of the data should then be performed on copies so that there is no risk of damage to the original disk. After the original data has been altered, it can be difficult, if not impossible, to establish any credibility in any of the findings.

Do not make assumptions, and keep an open mind. Until all the facts have been gathered, do not let assumptions affect your analysis procedure. For example, if a host is compromised, there may be evidence as to the nature of the attack or what was modified or attempted. Although log files can be tampered with to remove incriminating evidence, the intruder may not have been intelligent enough to do so. This isn’t to say that a strict procedure should always be followed; that can also lead to an incomplete analysis. Keep an open mind and make decisions based on the facts.

Often an analysis will entail looking at the file structure. To be successful, an examiner should know the filesystem layout and be able to recognize something that looks out of place, or wrong. This includes file traits such as location, size, timestamps, and permissions. It may help to keep a complete filesystem listing for easier analysis and searching.
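
For example, a recursive long listing of the evidence volume, saved off to a file, gives the examiner something that can be searched offline with grep. A minimal sketch, assuming a hypothetical mount point of /Volumes/evidence:

# -a includes dotfiles; -T prints full timestamps with -l
ls -alRT /Volumes/evidence > evidence-listing.txt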

All aspects of a disk can be scrutinized, including any unallocated portions of the disk, unused data, and even swap space. Files that are deleted may still be recovered, and there are tools specifically designed to resurrect files that were deleted by the operating system.

When crimes involve computers, there is evidence left behind; sometimes this evidence is not so obvious. Computer forensics is all about finding that evidence. This entails acquiring seized data without modifying it, and then analyzing it by looking for specific files, revealing deleted files, analyzing logs, and basically performing a structured analysis of a system to determine the facts surrounding its involvement in a crime. The remainder of this chapter is devoted to practical information related to the forensic analysis of Mac OS X systems. This includes setting up Osiris to maintain periodic snapshots of the filesystem, using TASK to analyze a suspect host, and a list of important things to check for during any analysis of Mac OS X.

Osiris

Osiris is a file-integrity management application that works quite well on Mac OS X. In a nutshell, Osiris periodically monitors the attributes of files so that an administrator can stay alert to changes that may indicate a break-in or an abuse of the system. It works by scanning the filesystem on a regular basis; with each scan, the collected file attributes are stored in a database. An application named scale then compares each new database against the last trusted database and produces a flat-file report based on the differences between the two.

Osiris originally started out as a set of Perl scripts written by Preston Norvell. Eventually this project evolved into a more complete and configurable application (written in C) for monitoring the integrity of UNIX systems, including Mac OS X. However, the current release of Osiris is restricted in that it only supports a usage model to monitor a single host. Despite this limitation, Osiris can still be useful for monitoring changes to a system.

Although there are other solutions for checking file integrity, Osiris is the only open source solution that truly supports Mac OS X. Osiris includes sample configuration files for Mac OS X and Mac OS X Server and is HFS-aware with the capability to scan and compare file resource forks.

In this section, we will discuss some general security considerations concerning monitoring file integrity and go over how to set up and use Osiris on Mac OS X to monitor the integrity of a single host.

General Security Considerations

An Osiris database is just a collection of glorified file stat structures. With each scan, a new database is created that is a snapshot of the filesystem. Forensic examination can certainly benefit from having a collection of such snapshots. It is common for an examiner to look at the three timestamps associated with a file: the last modification time (mtime), the last access time (atime), and the last file status change (ctime). Analysis of these attributes can reveal a trail of activity. Having a collection of Osiris databases is kind of like having a hidden camera that periodically takes a snapshot of the environment. Having the last timestamp values for files is good, but having a timeline of these values is even better.
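
On Mac OS X, these three timestamps can be inspected with nothing more than ls: by default, -l prints mtime, while -u and -c select atime and ctime, respectively. A quick illustration using /bin/ls as the subject:

ls -lT  /bin/ls    # last modification time (mtime)
ls -luT /bin/ls    # last access time (atime)
ls -lcT /bin/ls    # last status change time (ctime)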

Another thing to consider is that because all the databases are stored on the monitored host, they run the risk of being tampered with, along with any generated comparison reports. Ideally, the data collected during a scan would never be written to disk, but securely transported somewhere else for analysis. Developing versions of Osiris support this concept of remote storage for databases; the current release does not. However, Osiris does have a MySQL module for storing scan data in a database, even a database on a remote host.

Finally, an application such as Osiris does not possess any real-time monitoring capabilities. By real-time, we mean being able to respond to detected changes in less than a second. Because it usually takes more than a second to perform a scan and perform comparisons, any real-time monitoring is impossible and would require a more low-level integration with the operating system. Does this make applications such as Osiris useless? Not at all! Even scans as far apart as 24 hours can be valuable and may provide the first indication of an intrusion.

Consider the following analogy: Suppose a bank installs perimeter security in the form of motion and heat-detection sensors. Additional precautions include locks on the doors, audible and silent alarms, video cameras strategically placed to capture every square inch of the interior, and multiple locked passageways leading to a vault protected by inches of steel and a sophisticated lock of some kind. No one element in this example serves to prevent every kind of penetration; they work collectively to provide a blanket of security for the valuables housed in the bank. Ideally, the installed perimeter security would thwart or detect any intrusion attempts, but eventually a set of circumstances will exist in which it will fail. There are cases where firewalls and network intrusion detection systems can be compromised or ineffective. These systems are generally given more attention because the main goal is to keep the bad guys out altogether. When the bad guys do get in, we need to know about it and, if possible, what was compromised.

The bottom line is that every detection or prevention tool involved in maintaining the security of a host is important. The key to effectively managing host security starts with an understanding of how to apply the strengths of any used tools and how to fill in the gaps between their weaknesses. Each has its fair share of strengths and weaknesses; it is up to the administrator to recognize them and act accordingly.

Ideally, a host should be scanned and a database of the system created before it is ever placed on a network. This initial database should be burned onto read-only media and placed in secure storage. We realize that this is not always possible and often machines are already deployed. These machines can still benefit from the use of Osiris, but keep in mind that the reports only disclose information on the differences between previous scans, not any outside source; the key is to start with a trusted host.

Installing Osiris

This section discusses how to install and configure Osiris to monitor the integrity of a small set of system-critical files and applications on Mac OS X. This is a lightweight configuration that will require little maintenance and provide a reliable system for managing the security of any Mac OS X Client or Mac OS X Server installation.

The source for Osiris can be freely downloaded from http://osiris.shmoo.com. At the time of this writing, the current version is 1.5.2. Download and verify the distribution package with the following commands:

bash-2.05a$ curl -O osiris.shmoo.com/data/osiris-1.5.2.tar.gz

bash-2.05a$ openssl sha1 osiris-1.5.2.tar.gz
SHA1(osiris-1.5.2.tar.gz)= b9bd841934f23fc544fc8cbb745cb79fd07c89bb

bash-2.05a$ curl "https://www.knowngoods.org/search.php?release=osiris-1.5.2.tar.gz&item=sha1"
b9bd841934f23fc544fc8cbb745cb79fd07c89bb

Now configure and build Osiris with the default settings. This installs the binaries in /usr/local/bin/ and configures Osiris to store scan information in local database files as opposed to a MySQL database.

bash-2.05a$ ./configure
bash-2.05a$ make
bash-2.05a$ make install

Next, we need to set up a repository where the configuration files, databases, and logs will be stored. A good place for this on Mac OS X is the /var/db/ directory. Create an Osiris directory tree there and set some appropriate permissions with the following commands:

bash-2.05a$ sudo mkdir /var/db/osiris
bash-2.05a$ sudo mkdir /var/db/osiris/configs
bash-2.05a$ sudo mkdir /var/db/osiris/logs
bash-2.05a$ sudo chown -R root:admin /var/db/osiris
bash-2.05a$ sudo chmod -R 0750 /var/db/osiris

This ensures that only root has write access to the directory, while admin users can still read and compare the database files. If it bothers you to store the configuration or log files here, they can just as easily be placed somewhere else such as under the /etc directory.

Configuring and Automating Osiris

With Osiris installed, we can now configure and automate it to periodically perform a basic integrity scan. Osiris comes with a variety of built-in filters to allow only files matching specific attributes to be included in the database. However, for our purposes here, we are only going to scan the built-in system applications found in the /bin, /sbin, /usr/bin, and /usr/sbin directories.

An Osiris configuration file consists of a sequence of directives and directory blocks. Each directive is either global or local to a specific block. If no directive is specified in a block, the global is used. Each directory block then consists of a list of rules that determine which files from that directory are logged to the database. If no rule matches, the file is ignored. When no rules are specified, the global rule is assumed. For a list of valid rules and syntax information, refer to the online documentation at http://osiris.shmoo.com/docs.

Under the configs directory, create a file named daily.conf with the following contents:

Database   /var/db/osiris/daily.osi
Verbose    no
ShowErrors  no
Prompt   no
Recursive  yes
FollowLinks no

IncludeAll perm,mtime,ctime,inode,links,uid,gid,bytes,blocks,flags
Hash md5

<Directory  /bin>
</Directory>

<Directory  /sbin>
</Directory>

<Directory  /usr/bin>
</Directory>

<Directory  /usr/sbin>
</Directory>

The database directive specifies the path to store the scanned file data. The verbose and showerrors directives are turned off so that no output related to scanning progress or error messages are printed. The recursive directive sets the default scanning rule so that a recursive scan is conducted on each directory block, unless otherwise specified. Finally, the followlinks directive is turned off so that symbolic links are not traversed. The IncludeAll directive sets the default list of file attributes to monitor; in this example we monitor all except the last accessed attribute (atime). The hash directive specifies which checksum algorithm to use and must be one of md5, sha, haval, or ripemd.

This configuration will perform a scan of all system applications and should not take more than a couple of seconds to run. Each directory block is empty and thus will use the global IncludeAll rule specified at the top of the configuration.
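
Because directives can be local to a block, a directory can also override a global setting. As a sketch, assuming the directive syntax described above, a hypothetical block for /usr/local/bin could disable recursion locally while the rest of the configuration remains recursive:

<Directory  /usr/local/bin>
Recursive no
</Directory>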

Next, we need to add some lines to the periodic script so that osiris and scale are automatically run each day. Edit the file /etc/periodic/daily/500.daily and add the following lines to the bottom of the file:

# run osiris and compare against trusted database.

DATE=`date +%m-%d-%Y`

CONFIG=/var/db/osiris/configs/daily.conf
LOG=/var/db/osiris/logs/${DATE}.log

BASE_DB=/var/db/osiris/base.osi
DAILY_DB=/var/db/osiris/${DATE}.osi

/usr/local/bin/osiris -f ${CONFIG} -o ${DAILY_DB}
/usr/local/bin/scale -n -q -l ${BASE_DB} -r ${DAILY_DB} -o ${LOG}

The current release of Osiris does not have any built-in notification system. However, an application such as swatch can be run on the generated report file to look for specific files or attributes (see Chapter 11, “Auditing,” for information about using swatch to monitor files). A more basic solution is to have email sent whenever the number of files that changed is greater than zero. This can be accomplished by appending the following additional lines to the daily periodic script:

DELTAS=`grep "records that differ" ${LOG} | awk '{print $4}'`

if test ${DELTAS} -ne 0; then
  mail -s "osiris report - ${LOG}" admin@example.com < ${LOG}
fi

With these additions to the periodic script, Osiris will run (as root) each day and will generate a new database, compare that database against the trusted database, and generate a report in /var/db/osiris/logs/ with the date as the filename. If any files have changed, the entire log will be mailed to the email address specified.
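
Rather than waiting a day to find out whether the additions work, the script can be exercised by hand and the resulting report inspected. Note that this runs everything in 500.daily, not just the Osiris additions:

bash-2.05a$ sudo sh /etc/periodic/daily/500.daily
bash-2.05a$ sudo cat /var/db/osiris/logs/`date +%m-%d-%Y`.log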

Finally, we need to prime this system by creating the initial database used for all the comparisons. This can be done with the following command:

bash-2.05a$ sudo osiris -f /var/db/osiris/configs/daily.conf -o /var/db/osiris/base.osi

The generated databases will each be roughly 100K, so over time they will consume a noticeable amount of disk space. One option is to not save the databases under unique names; the periodic script can be adjusted to write the daily scan to the same filename each day, as shown below. Likewise, the log files will also build up and can be rotated in a similar fashion to the system logs.
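
For example, the relevant lines of the periodic script could be changed to reuse fixed filenames (current.osi and current.log here are hypothetical names), so that each day's scan and report overwrite the last:

DAILY_DB=/var/db/osiris/current.osi
LOG=/var/db/osiris/logs/current.log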

All database files are created with permissions of 0600 and all scale report files are created with permissions of 0400.

When a legitimate change occurs, such as a software update, the scale report may contain many entries for changed file attributes. The reported entries can be reduced by changing the periodic script to use a more current database file as the trusted database; don’t forget to review the changes before assuming a newer database is to be trusted.
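
For example, after verifying that a day's reported changes were the result of a legitimate software update, that day's database (the date-stamped filename here is hypothetical) can be promoted to become the new trusted baseline:

sudo cp /var/db/osiris/02-10-2003.osi /var/db/osiris/base.osi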

Using Osiris to Monitor SUID Files

Among the sample configuration files included with Osiris is one that can be used to monitor SUID applications. Maintaining a separate database specifically for SUID and SGID applications is probably not a bad idea. The file is named suid.conf and can be found in the configs/ directory of the Osiris source tree.

This sample configuration file can be used to determine any added, removed, or altered SUID or SGID files on the entire disk. However, such an intensive scan can take a few minutes and almost approaches the annoyance of a virus scanner.
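
A sketch of a one-off SUID/SGID scan and comparison, assuming suid.conf has been copied into the repository created earlier and that a trusted suid-base.osi was created while the host was in a known-good state:

sudo osiris -f /var/db/osiris/configs/suid.conf -o /var/db/osiris/suid-today.osi
sudo scale -l /var/db/osiris/suid-base.osi -r /var/db/osiris/suid-today.osi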

Using scale

The scale application is used primarily to compare database files, but there are some features of scale that can make it a handy tool for forensic examiners.

As a simple example, to compare the two databases cain.osi and able.osi, do the following:

bash-2.05a$ scale -l cain.osi -r able.osi -o scale.log

The output file is optional; results will be printed to standard output if no output file is specified. Ideally, scale will produce a report file stating zero changes like the one shown here:

osiris database comparison
Sat Feb 8 19:12:45 2003

[ database:  /var/db/osiris/base.osi ]

 records:      809
 source:      config file

 created on:    Sat Feb 8 18:29:46 2003
 created by:    root
 created with:   osiris 1.5.2

[ database:  /var/db/osiris/02-08-2003.osi ]

 records:     809
 source:     config file

 created on:  Sat Feb 8 19:12:45 2003
 created by:   root
 created with:  osiris 1.5.2

[ file differences ]

[ new files (0) ]
[ missing files (0) ]

records compared:  809
records that differ: 0
new records:     0
missing records:   0

scale also can be used to print the entire contents of the database. To see an ls style listing, use the -p option. To see every attribute of every file in the database, use the -x option. The following example prints all attributes for the file /bin/ls found in the database named base.osi:

bash-2.05a$ scale -x base.osi | grep "(/bin/ls)" -A 16
path: (/bin/ls)
checksum: md5(2f6c62439d1a56b27a52a2478c4ecbf6)
user: root
group: wheel
device: 234881033
inode: 123111
permissions: -r-xr-xr-x
links: 1
uid: 0
gid: 0
mtime: Jan 30, 2003 10:18
atime: Jan 30, 2003 10:18
ctime: Jan 30, 2003 10:18
device_type: 0
bytes: 27668
blocks: 56
block_size: 4096

Specific attributes can be isolated with further grep statements. For example, to see the last modification time for that same file:

bash-2.05a$ scale -x scan.osi | grep "(/bin/ls)" -A 16 | grep mtime
mtime: Jan 30, 2003 10:18

Forensic Analysis with TASK

Often a forensic examination of a system is contracted out to specialists who charge thousands of dollars, and some situations may warrant such an expense. Likewise, you can collect an arsenal of expensive hardware and software to put in your forensic toolkit, but this is not always necessary. There are open source forensic tools that are quite effective. The most common of these is TASK, developed by @stake (http://www.atstake.com). However, TASK does not have support for HFS+, the default filesystem on Mac OS X. Thus, the reality is that any hard-core analysis of a Mac OS X system demands the use of commercial software or services, something we will not cover in this book. We will not, however, leave you empty-handed.

In this section, we will provide information about what elements of TASK can be used on Mac OS X, and other issues involved in a postmortem situation on Mac OS X.

Overview of TASK

TASK (The @stake Sleuth Kit) is a filesystem analysis toolkit containing applications that can examine many different types of filesystems, for both UNIX and Windows. In the event that a live analysis is necessary, TASK can also be used to conduct certain types of analysis on a running system.

Collectively, the TASK command line tools work off hard disk data collected with the common UNIX dd command. Some of the most prominent features include:

  • View deleted files.

  • Reveal detailed information about files and the structure of the filesystem.

  • File checksum verification tools.

  • File sorting based on type and extension.

  • Timeline creation based on the modification, access, and change timestamps associated with every file.

One of the more useful TASK applications that can be used on Mac OS X is mactime, used to construct a timeline of file activity. mactime is discussed later in this chapter.

Getting the Data

Before we start, it is recommended that a machine be used solely for the purpose of creating the disk image, or copy of the target filesystem. We will refer to such a machine as the “imaging” system. Obviously, this system needs to be trusted and should not be connected to a network. The imaging system will be used to create a checksum of the original filesystem, do a byte-for-byte copy of the disk, and then verify the copy with another checksum. The original disk should then be locked away and any further copies should be made from this master copy.

Another thing to consider is how to write-protect any disks before they are ever connected to the imaging system. There are commercial products available that provide write protection for disks. Sometimes the drives themselves have the capability to disable write operations.

The copied image may be quite large. There are many hard disk vendors that now make external FireWire drives with capacities greater than 120GB, which should be sufficient for storing a disk image. Before copying data, the disk that will be used to store the copy should be cleared. This can easily be accomplished with the dd command. For example, assuming the disk used to store the copied disk is located at /dev/disk0s10, use the following command to wipe any pre-existing data:

dd if=/dev/zero of=/dev/disk0s10
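
Note that dd's default block size of 512 bytes makes a full-disk wipe painfully slow; BSD dd accepts size suffixes, so specifying a larger block size speeds things up considerably:

dd if=/dev/zero of=/dev/disk0s10 bs=1m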

Finally, we would like to boot the imaging system such that the target drive is not mounted, only discovered. An easy way to do this is to boot into single user mode. This also ensures that the system doesn’t find its way onto a network.

To boot into single user mode, boot while holding down the Command-s keyboard sequence. Once in single user mode, check and mount the root filesystem with the following commands:

/sbin/fsck -y
/sbin/mount -uw /

Next, start the automounter with this command:

/sbin/autodiskmount -v

If not already known, locate the discovered target disk from the output of the automount daemon. The diskutil application also can be helpful in locating this. The following is an example of using diskutil to look at the main disk:

bash-2.05a$ diskutil list disk0
/dev/disk0
   #:                   type name          size      identifier
   0: Apple_partition_scheme               *14.1 GB  disk0
   1:    Apple_partition_map               31.5 KB   disk0s1
   2:         Apple_Driver43               27.0 KB   disk0s2
   3:         Apple_Driver43               37.0 KB   disk0s3
   4:       Apple_Driver_ATA               27.0 KB   disk0s4
   5:       Apple_Driver_ATA               37.0 KB   disk0s5
   6:         Apple_FWDriver               100.0 KB  disk0s6
   7:     Apple_Driver_IOKit               256.0 KB  disk0s7
   8:          Apple_Patches               256.0 KB  disk0s8
   9:              Apple_HFS OS X (10.2)   14.1 GB   disk0s9

Assuming the disk is disk2, first compute an MD5 checksum of the target disk and save it to a file so that it is not easily lost.

openssl md5 /dev/disk2 > original.md5
cat original.md5
MD5(/dev/disk2)= 81ff7a34cbecb764a211582608073172

Next, copy everything from the target disk onto the repository using the dd command. For example, if your repository was /Volumes/extfw, you would use the following command:

dd if=/dev/disk2 of=/Volumes/extfw/seizure-master.img

Go and get a cup of coffee; this will probably take a long time, depending on the size of the disk.

Finally, compute the MD5 of the copy and verify that it is the same as the original determined earlier:

openssl md5 /Volumes/extfw/seizure-master.img
MD5(/Volumes/extfw/seizure-master.img)= 81ff7a34cbecb764a211582608073172

Now we are ready to analyze the copied data. Do not forget to immediately shut down the imaging system, label the target drive, and lock it up somewhere safe. All copies for analysis should now be made from the image that was just created.

Analysis with TASK

First, acquire a copy of the target image to be analyzed. Do not forget to verify the MD5 sum against the original that was initially taken from the actual disk. To prevent the image file from being modified, change the permissions to read-only. This can be done through the GUI with Get Info, or on the command line as follows:

bash-2.05a$ chmod 0440 seizure-copy1.img
bash-2.05a$ chflags uchg seizure-copy1.img

Next, mount the image with Disk Copy.app by clicking the image, or using the following on the command line:

bash-2.05a$ hdiutil mount seizure-copy1.img

Before we begin any analysis, we need to first download and install TASK. TASK can be downloaded from the @stake web site http://www.atstake.com/research/tools/task/. At the time of this writing, the current version of TASK is 1.60. The included Makefile works just fine on Mac OS X.

bash-2.05a$ tar xvfz task-1.60.tar.gz
bash-2.05a$ cd task-1.60
bash-2.05a$ make

This places all the applications in the bin/ directory. However, most of the TASK applications operate directly on the disk image and thus require foreknowledge of the filesystem. As mentioned earlier, HFS is not on the list of currently supported filesystem types but there are some tools that can still be used.

Analyzing the Filesystem

A lot can be learned about a system by just browsing the filesystem. Because there are not a lot of nifty open source tools available for examining Mac OS X systems, we will try to provide more than a few valuable tips about what to look for in a potentially compromised host.

First, and foremost, look at log files. It may not be necessary to put on your detective hat when you have a blatantly obvious sequence of events revealed by entries in a log file. Chapter 11 contains a listing of log files and their locations for Mac OS X and Mac OS X Server. Log files can be tampered with though, so do not be quick to jump to conclusions.
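
A first pass over the image's logs might look something like the following; the mount point and search patterns are only examples:

# scan the system log for failed authentication attempts
grep -i "fail" /Volumes/seizure-copy1/var/log/system.log
# review the login history recorded in the image's wtmp
last -f /Volumes/seizure-copy1/var/log/wtmp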

NetInfo is the default store for users and groups on Mac OS X. If a host has been compromised, an attacker will sometimes create new accounts, possibly a duplicate root account. The NetInfo database file is located in the /var/db/netinfo/ directory. The default database is local.nidb. The command line tool nicl can be useful in examining the contents of this database. For example, to print the users and their UID values from a NetInfo database on /Volumes/seizure-copy1/, use the following command:

nicl -raw /Volumes/seizure-copy1/var/db/netinfo/local.nidb -list /users

To see the list of groups and the GID values, use a similar command:

nicl -raw /Volumes/seizure-copy1/var/db/netinfo/local.nidb -list /groups

Things to look for include unrecognized groups or users, duplicate UID or GID values, and non-root users with UID of zero. Here is a useful way to search specifically for users with a UID of zero:

nicl -raw /Volumes/seizure-copy1/var/db/netinfo/local.nidb -search / 0 -1 uid 0

Other things that can easily be extracted from the NetInfo database file include the trusted_networks setting at the root of the domain. By default, this is an empty list. To view the value of this field, use the following command:

nicl -raw /Volumes/seizure-copy1/var/db/netinfo/local.nidb -read / trusted_networks

Likewise, the list of machines entries in the database can be viewed with the following command:

nicl -raw /Volumes/seizure-copy1/var/db/netinfo/local.nidb -list /machines

Although the nicl command line application is quite clunky, it can be useful as demonstrated here. For more information about nicl, reference the man page.

Like most UNIX systems, there are typical configuration files under the /etc/ directory. The /etc/hostconfig file contains system configuration settings. The /etc/hosts.allow and /etc/hosts.deny files contain settings for TCP wrappers. The /etc/xinetd/ files should be audited as well. Other prime targets are the crontab files and the periodic scripts that get run daily; a quick audit is sketched below. The root crontab file is located in /etc/crontab. On Mac OS X, this file starts out with only three entries to run the daily, weekly, and monthly periodic scripts.
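
A quick audit of the scheduled tasks on the mounted image, assuming the mount point used earlier:

cat /Volumes/seizure-copy1/etc/crontab
ls -lR /Volumes/seizure-copy1/etc/periodic/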

Logs for crashed applications are stored in the /Library/Logs/CrashReporter/ directory. Unless crash dumps are turned off in /etc/hostconfig, a detailed log is created every time an application crashes.

The version of the operating system, including the build version, is located in the file /System/Library/CoreServices/SystemVersion.plist (keep in mind that this information can easily be modified).

Startup items are located in /System/Library/StartupItems and are started by the SystemStarter process on boot. It is a good idea to check these and make sure that there is nothing out of the ordinary here. Specifically, audit each script file to be sure nothing sneaky is going on. Each startup item has a directory and a shell script of the same name as that directory.
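
For example, to list the startup items on the mounted image (third-party startup items, where present, live in /Library/StartupItems and deserve the same scrutiny):

ls -l /Volumes/seizure-copy1/System/Library/StartupItems/
ls -l /Volumes/seizure-copy1/Library/StartupItems/ 2>/dev/null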

The time zone setting is designated by the symbolic link /etc/localtime, which points to a time zone file in /usr/share/zoneinfo/.

Another useful thing to do is to perform a SUID and SGID audit of the system to look for any such applications that are out of the ordinary. This can be done easily enough with the find command, as follows:

find /Volumes/seizure-copy1 -type f \( -perm -02000 -or -perm -04000 \)

Appendix A, “SUID and SGID Files,” lists the installed SUID and SGID files on Mac OS X and Mac OS X Server.

Although the filesystem does not show everything about the state of the system when it was operational, an examiner can piece together a lot of what was going on. This could include the nature of an attack, what was exploited, and maybe any backdoors that were put in place. An understanding of the organization of the filesystem, and what would be considered out of the ordinary, is invaluable.

Looking at Timestamps

The mactime application that is part of TASK can organize the files in the image according to their modification time. Because mactime relies on TASK applications that do not understand HFS, we need to use an additional timeline application based on a tool from The Coroner's Toolkit (of which TASK is an extension). This application is mac-robber and can be downloaded from the @stake site http://www.atstake.com/research/tools/mac-robber-1.00.tar.gz.

Unpack mac-robber and build it as follows:

bash-2.05a$ tar xvfz mac-robber-1.00.tar.gz
bash-2.05a$ cd mac-robber-1.00
bash-2.05a$ make GCC_OPT="-O2 -Wall"

Next, run the mac-robber tool on the target image to collect the timestamps:

bash-2.05a$ ./mac-robber /Volumes/seizure-copy1/ > seizure-copy1.mac

This produces a text file that the TASK application, mactime, can understand and use to print a chronological file listing. Next, provide this file as an input to mactime. By default, the timeline is printed to standard output so it is a good idea to redirect it to a file. It is recommended that the appropriate time zone be specified with the -z option. This can make understanding the analysis and correlation with other data less painful.

bash-2.05a$ mactime -z MST7MDT -b seizure-copy1.mac > seizure-copy1.timeline

If the raw UID and GID values are bothersome, mactime has options to specify the group and password files to be used so that it can map UID and GID values to names. These must be extracted from the NetInfo domain.
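
mactime inherits -p and -g options from The Coroner's Toolkit for supplying passwd- and group-format files. A sketch, assuming such files have already been reconstructed from the image's NetInfo database (the filenames here are hypothetical):

mactime -z MST7MDT -p seizure.passwd -g seizure.group \
    -b seizure-copy1.mac > seizure-copy1.timeline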

The output of the mactime application is a listing of files organized by their timestamps. That is, the first entry is the oldest timestamp; the last entry is the most recent timestamp. The timestamps include the last modification time (mtime), last access time (atime), and the last change time (ctime). Thus, it is possible for a file to appear three times in this list if all three of its timestamps are different. Seeing the files organized in this fashion can sometimes reveal what operations were performed on the system. It also can be used and correlated with known events. For example, if you know the web server stopped working at a specific time, the timeline in this file will reveal what parts of the filesystem were being operated on.

Here is a snippet of a sample timeline generated with mactime. At the top is the timestamp value. Each of the five files in this example share the same access timestamp (atime). Each entry consists of seven columns of information about the file. The first column is the size (in bytes) of the file. The second column denotes the timestamp; in this case all are last-access time values as a result of the daily periodic scripts being run. The third column is the file permissions. The fourth and fifth columns are the UID and GID values, respectively. The sixth column is the inode number for the file and the last column is the file path.

Sun Feb 09 2003 03:15:08

3094 .a. -r--r--r-- 0  0 28137   /etc/defaults/periodic.conf
 724 .a. -rw-r--r-- 0  0 7878    /etc/syslog.conf
1389 .a. -r-xr-xr-x 0  0 28138   /etc/periodic/daily/100.clean-logs
   0 .a. -rw-r--r-- 0  0 155923  /etc/dumpdates
 337 .a. -rw-r--r-- 0  0 27062   /etc/networks

Summary

One good way to prevent a host from being compromised is to completely disassemble it and bury the parts in your backyard. In most cases this is not an option and it is better to be prepared for the worst, just in case. In this chapter we saw some less-than-glamorous ways to examine a host for intrusions or abuse. Applications like Osiris can be used to monitor the integrity of files. Beyond that, there are more intrusive methods to examining a system for evidence that may indicate what was compromised and how. Although the forensic toolkit TASK does not fully support Mac OS X, portions of it can still be used to examine a Mac OS X disk image.

This chapter discussed practical ways to detect a potentially compromised host, as well as methods and tools that can be used to look for damage. In the next chapter we will discuss more general issues related to incident response.
