How it works...

Starting with the imports, we bring in the Sleuth Kit utilities and pylnk library. We also bring in libraries for argument parsing, writing the CSV reports, and StringIO to read the Sleuth Kit objects as files:

from __future__ import print_function
from argparse import ArgumentParser
import csv
import StringIO

from utility.pytskutil import TSKUtil
import pylnk

This recipe's command-line handler takes three positional arguments, EVIDENCE_FILE, IMAGE_TYPE, and CSV_REPORT, which represent the path to the evidence file, the type of evidence file, and the desired output path to the CSV report, respectively. These three arguments are passed to the main() function.

if __name__ == '__main__':
    parser = ArgumentParser(
        description=__description__,
        epilog="Developed by {} on {}".format(
            ", ".join(__authors__), __date__)
    )
    parser.add_argument('EVIDENCE_FILE', help="Path to evidence file")
    parser.add_argument('IMAGE_TYPE', help="Evidence file format",
                        choices=('ewf', 'raw'))
    parser.add_argument('CSV_REPORT', help="Path to CSV report")
    args = parser.parse_args()
    main(args.EVIDENCE_FILE, args.IMAGE_TYPE, args.CSV_REPORT)
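The handler can be exercised without a shell by passing an explicit argument list to parse_args(); the paths below are hypothetical placeholders, not values from the recipe:

```python
from argparse import ArgumentParser

# Minimal sketch of the command-line handler above
parser = ArgumentParser(description="Parse lnk files from an evidence file")
parser.add_argument('EVIDENCE_FILE', help="Path to evidence file")
parser.add_argument('IMAGE_TYPE', help="Evidence file format",
                    choices=('ewf', 'raw'))
parser.add_argument('CSV_REPORT', help="Path to CSV report")

# parse_args() accepts an explicit list, handy for testing without a shell;
# an IMAGE_TYPE outside the choices tuple would raise a usage error here
args = parser.parse_args(['evidence.E01', 'ewf', 'report.csv'])
print(args.IMAGE_TYPE)  # -> ewf
```

Because choices restricts IMAGE_TYPE to 'ewf' or 'raw', invalid formats are rejected before main() ever runs.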

The main() function begins by creating the TSKUtil object used to interpret the evidence file, then iterates through the filesystem to find files ending in lnk. If no lnk files are found, the script alerts the user and exits. Otherwise, we specify columns representing the data attributes we want to store for each lnk file. While other attributes are available, these are some of the more relevant ones we extract in this recipe:

def main(evidence, image_type, report):
    tsk_util = TSKUtil(evidence, image_type)
    lnk_files = tsk_util.recurse_files("lnk", path="/", logic="endswith")
    if lnk_files is None:
        print("No lnk files found")
        exit(0)

    columns = [
        'command_line_arguments', 'description', 'drive_serial_number',
        'drive_type', 'file_access_time', 'file_attribute_flags',
        'file_creation_time', 'file_modification_time', 'file_size',
        'environmental_variables_location', 'volume_label',
        'machine_identifier', 'local_path', 'network_path',
        'relative_path', 'working_directory'
    ]

Next, we iterate through the discovered lnk files, opening each as a pylnk file object with the open_file_as_lnk() function. The returned object exposes the lnk file's attributes for us to read. We initialize the attribute dictionary with the file's name and path and then iterate through the columns we specified in the main() function. For each column, we try to read the specified attribute value, storing "N/A" if it is unavailable. These attributes are stored in the lnk_data dictionary, which is appended to the parsed_lnks list once all attributes are extracted. After this process completes for each lnk file, we pass this list, along with the output path and column names, to the write_csv() method.

    parsed_lnks = []
    for entry in lnk_files:
        lnk = open_file_as_lnk(entry[2])
        lnk_data = {'lnk_path': entry[1], 'lnk_name': entry[0]}
        for col in columns:
            lnk_data[col] = getattr(lnk, col, "N/A")
        lnk.close()
        parsed_lnks.append(lnk_data)

    write_csv(report, columns + ['lnk_path', 'lnk_name'], parsed_lnks)
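The getattr() fallback at the heart of this loop can be illustrated with a toy stand-in object; FakeLnk below is hypothetical and not part of pylnk:

```python
# Toy stand-in for a parsed lnk object; a real pylnk file object
# exposes many more attributes than this
class FakeLnk(object):
    description = "Shortcut to notes.txt"
    file_size = 2048

columns = ['description', 'file_size', 'network_path']
fake = FakeLnk()

# getattr's third argument supplies a default when the attribute is
# missing, so absent fields become "N/A" instead of raising AttributeError
row = {col: getattr(fake, col, "N/A") for col in columns}
print(row['network_path'])  # -> N/A
```

This is why the loop never crashes on shortcuts that lack, say, a network path: the missing attribute simply maps to "N/A" in the report.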

To open our pytsk file object as a pylnk object, we use the open_file_as_lnk() function, which operates like other similarly named functions throughout this chapter. This function reads the entire file, using the read_random() method and the file's size property, into a StringIO buffer that is then passed to a pylnk file object. Reading in this manner allows us to treat the data as a file without needing to cache it to disk. Once we have loaded the file into our lnk object, we return it to the main() function:

def open_file_as_lnk(lnk_file):
    file_size = lnk_file.info.meta.size
    file_content = lnk_file.read_random(0, file_size)
    file_like_obj = StringIO.StringIO(file_content)
    lnk = pylnk.file()
    lnk.open_file_object(file_like_obj)
    return lnk
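The in-memory technique itself can be sketched with the standard library alone. On Python 3, io.BytesIO plays the role that StringIO.StringIO plays here for binary data; the payload bytes below are a made-up stand-in for what read_random() would return:

```python
import io

# Hypothetical payload standing in for read_random() output; a real
# .lnk file begins with the 4-byte header size 0x0000004C (76)
file_content = b'\x4c\x00\x00\x00' + b'\x00' * 16

# Wrapping the bytes in an in-memory buffer yields a file-like object
# (read/seek/tell) without ever writing anything to disk
file_like_obj = io.BytesIO(file_content)
header = file_like_obj.read(4)
print(header)  # -> b'L\x00\x00\x00'
```

Any library that accepts a file-like object, as pylnk's open_file_object() does, can consume such a buffer in place of an on-disk file.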

The last function is the common CSV writer, which uses the csv.DictWriter class to iterate through the data structure and write the relevant fields to a spreadsheet. The columns list defined in the main() function, passed here as the fieldnames argument, determines the column order in the resulting spreadsheet; one could change that order there to modify how the columns are displayed.

def write_csv(outfile, fieldnames, data):
    with open(outfile, 'wb') as open_outfile:
        csvfile = csv.DictWriter(open_outfile, fieldnames)
        csvfile.writeheader()
        csvfile.writerows(data)
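csv.DictWriter's behavior can be seen in a minimal sketch that writes to an in-memory buffer instead of a file; the sample rows and field names below are invented for illustration:

```python
import csv
import io

fieldnames = ['lnk_name', 'local_path', 'file_size']
data = [
    {'lnk_name': 'notes.lnk', 'local_path': 'C:\\notes.txt',
     'file_size': 2048},
    {'lnk_name': 'app.lnk', 'local_path': 'N/A', 'file_size': 'N/A'},
]

# DictWriter maps each dictionary's keys onto the fieldnames order,
# so the column layout follows the fieldnames list, not the dicts
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames)
writer.writeheader()
writer.writerows(data)
print(buffer.getvalue().splitlines()[0])  # -> lnk_name,local_path,file_size
```

Note that a dictionary key absent from fieldnames would raise a ValueError by default, which is why the recipe appends 'lnk_path' and 'lnk_name' to the columns list before calling write_csv().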

After running the script, we can view the results in a single CSV report, as seen in the following two screenshots. Since there are many columns, we have elected to display only a few for readability's sake:
