Chapter 47. Determining Binary File Differences

 

A complex system that works is invariably found to have evolved from a simple system that worked. ... A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.

 
 --John Gall, from Systemantics: How Systems Work and Especially How They Fail

In the wonderful world of software development and deployment, products are rarely released error-free. Even in the rare instance that a product ships without any internal bugs, issues surface from the different hardware and operating system configurations on end users’ computers. Whether the issues are related to security vulnerabilities, application instability, or a feature that was not feasible to implement in an earlier version, software generally requires at least a couple of revisions to keep the people using it happy and interested in the product. This chapter discusses one method of updating older versions in a manner suitable for environments requiring a small memory or hard drive footprint.

Deployment is an important topic to address in software development. If a product you have deployed requires an important update, how will the people already using your software receive it? The most popular medium for transferring data is the Internet, and most auto-update engines use it to deliver new versions to users. One problem with the Internet is that, with such a wide demographic of users, not everyone has a fast connection or transfer rate.

Imagine you have a 30 MB data file included with an initial release of your product, and in the next update 200 KB is modified. Would it be feasible to have everyone download the new data file? Such a large file would take a long time to download on a dial-up connection, especially when only a small fraction of the original data was modified. A better method would be to transmit only the bytes that changed, and merge them into the existing data file.

An ideal solution would be a utility that could take an old and a new version of a file, generate a list of differences, and offer a method to both apply and deploy the data changes to users.

To start, we need an algorithm that can determine a list of differences between two arbitrary sets of data that are sequentially similar, and do so in a way that lets us transform the old dataset into the new one as efficiently as possible.

Note

The large segments of source code are available on the Companion Web site. Please refer to the source code of the example in order to clarify any implementation questions.

What Is Levenshtein Distance?

Devised in 1965 by the Russian scientist Vladimir Levenshtein, Levenshtein distance (LD) is a measure of the similarity between two strings. The metric is also known as edit distance by people who cannot pronounce “Levenshtein,” and it is the smallest number of deletions, insertions, and substitutions required to transform one string into another, which we will refer to as the source (s) and the target (t), respectively. An edit is either deleting a character, inserting a character, or substituting one character for another.

Computing the Levenshtein distance with the traditional algorithm takes Θ(m × n) time, where m and n are the lengths of the source and target, so the cost grows with the product of the two string lengths. The greater the edit distance, the more differences are present between the two strings.

Given:

(s) = "Apple"
(t) = "Apple"
LD(s, t) = 0 because both s and t are identical.

(s) = "Apple"
(t) = "Apples"
LD(s, t) = 1 because one insertion is required to transform s into t.

(s) = "Apple"
(t) = "Ape"
LD(s, t) = 2 because two deletions are required to transform s into t.
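
For readers who want to see the classic formulation in code, here is a minimal, standalone sketch of the traditional matrix-based calculation. It is illustrative only and is not part of the differencing engine presented later in this chapter.

private static int LevenshteinDistance(string s, string t)
{
    // d[i, j] holds the edit distance between the first i characters
    // of s and the first j characters of t.
    int[,] d = new int[s.Length + 1, t.Length + 1];

    // Transforming a prefix into an empty string costs one edit per character.
    for (int i = 0; i <= s.Length; i++)
        d[i, 0] = i;
    for (int j = 0; j <= t.Length; j++)
        d[0, j] = j;

    for (int i = 1; i <= s.Length; i++)
    {
        for (int j = 1; j <= t.Length; j++)
        {
            int cost = (s[i - 1] == t[j - 1]) ? 0 : 1;

            d[i, j] = Math.Min(Math.Min(
                d[i - 1, j] + 1,          // deletion
                d[i, j - 1] + 1),         // insertion
                d[i - 1, j - 1] + cost);  // substitution (or match)
        }
    }

    return d[s.Length, t.Length];
}

// LevenshteinDistance("Apple", "Apple")  returns 0
// LevenshteinDistance("Apple", "Apples") returns 1
// LevenshteinDistance("Apple", "Ape")    returns 2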

Levenshtein distance has been used in DNA analysis, plagiarism detection, speech recognition, document versioning systems, and spell checking. It can also be viewed as a generalization of Hamming distance, which handles only substitutions and requires both strings to be of equal length.

One of the biggest drawbacks to Levenshtein distance is the algorithm’s expensive memory requirement. While the traditional matrix-based approach is extremely precise, it is only practical on small datasets. As soon as the algorithm is used on a large dataset, the exorbitant number of comparisons and amount of memory quickly rule out the traditional implementation for our purposes. For example, comparing 50,000 bytes against a similarly sized dataset requires a matrix of roughly 2.5 billion cells, which means roughly 2.5 billion comparisons and multiple gigabytes of memory even at a single byte per cell.

Using Levenshtein distance directly would not satisfy our requirements, although the algorithm definitely influenced the solution presented in this chapter. The full source code is not reproduced here in order to keep the text easier to follow, but the Companion Web site has the complete source with comments. The only methods discussed in this chapter are those that form the core logic of the differencing engine. Please refer to the source code for a stronger understanding of the implementation itself.

Generating a Difference List

The main processing component of the differencing engine is the logic that compares the source and target buffers and outputs a sequence of transformations that converts the source data into the target data. The methods in this section make up the majority of the engine’s central logic.

The following method is roughly the starting point for building the difference list. A PatchOperation object is passed in that holds the source and target data buffers. This method initializes a few public properties and fires the PatchOperation off to the ProcessRange method. Later on, the PatchOperation is passed into BuildDifferencesList, which finalizes all the difference states into the transformation sequence.

private static PatchOperation Process(PatchOperation operation)
{

Note

The source code for this method is available on the Companion Web site.

}
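
The real definitions of PatchOperation, PatchState, and PatchDifference live in the companion source. Purely as orientation for the stubs that follow, the sketch below shows one plausible, simplified shape for these types and how Process might wire the pieces together; every member shown here is an assumption rather than the chapter’s actual definition, and the System and System.Collections.Generic namespaces are assumed to be imported.

public enum PatchDifferenceType { NoChange, Substitute, Delete, Insert }

public class PatchDifference
{
    public PatchDifferenceType Type;
    public int TargetStart, TargetEnd;   // range in the new (target) data
    public int SourceStart, SourceEnd;   // range in the old (source) data
}

public class PatchState
{
    public int SourceIndex;   // where a matching run begins in the source
    public int TargetIndex;   // where the same run begins in the target
    public int Length;        // how many bytes the run covers
}

public class PatchOperation
{
    public byte[] SourceData;
    public byte[] TargetData;
    public List<PatchState> Matches = new List<PatchState>();
    public List<PatchDifference> Differences = new List<PatchDifference>();
}

// Plausible wiring for Process: walk the full extent of both buffers,
// then collapse the recorded matches into the final difference list.
private static PatchOperation Process(PatchOperation operation)
{
    ProcessRange(operation,
                 0, operation.TargetData.Length,
                 0, operation.SourceData.Length);

    operation.Differences = BuildDifferencesList(operation);
    return operation;
}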

The following method processes a range of the source and target buffers and extracts any similarities between them. It mainly looks for data that does not change between the two buffers, which reduces the overall size of the transformation sequence.

private static void ProcessRange(PatchOperation operation,
                                 int targetStart,
                                 int targetEnd,
                                 int sourceStart,
                                 int sourceEnd)
{

Note

The source code for this method is available on the Companion Web site.

}
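
The book’s actual implementation is on the Companion Web site. As a hedged illustration of the general idea only, the following sketch finds the longest run of bytes shared by the two ranges (using the GetLongestSourceMatch routine stubbed next), treats that run as unchanged, and recurses into the unmatched data on either side. The minimum match length is an arbitrary assumption.

private static void ProcessRange(PatchOperation operation,
                                 int targetStart,
                                 int targetEnd,
                                 int sourceStart,
                                 int sourceEnd)
{
    // Nothing left to compare in one of the ranges.
    if (targetStart >= targetEnd || sourceStart >= sourceEnd)
        return;

    PatchState match = new PatchState();
    GetLongestSourceMatch(operation, match,
                          targetStart, targetEnd,
                          sourceStart, sourceEnd);

    // Runs shorter than this are not worth keeping; treating them as
    // changed data keeps the difference list from fragmenting.
    const int minimumMatchLength = 8;
    if (match.Length < minimumMatchLength)
        return;

    operation.Matches.Add(match);

    // Recurse into the unmatched data before and after the matching run.
    ProcessRange(operation,
                 targetStart, match.TargetIndex,
                 sourceStart, match.SourceIndex);
    ProcessRange(operation,
                 match.TargetIndex + match.Length, targetEnd,
                 match.SourceIndex + match.Length, sourceEnd);
}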

The following method performs the actual extraction of match data within a specified range.

private static void GetLongestSourceMatch(PatchOperation operation,
                                          PatchState state,
                                          int targetStart,
                                          int targetEnd,
                                          int sourceStart,
                                          int sourceEnd)
{

Note

The source code for this method is available on the Companion Web site.

}
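
Again as a hedged sketch rather than the book’s code: a brute-force version simply tries every pair of positions in the two ranges and keeps the longest run it finds. This quadratic search is also where the performance problem with dissimilar files, discussed at the end of the chapter, comes from.

private static void GetLongestSourceMatch(PatchOperation operation,
                                          PatchState state,
                                          int targetStart,
                                          int targetEnd,
                                          int sourceStart,
                                          int sourceEnd)
{
    state.Length = 0;

    for (int targetIndex = targetStart; targetIndex < targetEnd; targetIndex++)
    {
        for (int sourceIndex = sourceStart; sourceIndex < sourceEnd; sourceIndex++)
        {
            int maxLength = Math.Min(targetEnd - targetIndex,
                                     sourceEnd - sourceIndex);
            int length = GetSourceMatchLength(operation, targetIndex,
                                              sourceIndex, maxLength);

            // Keep the longest run seen so far.
            if (length > state.Length)
            {
                state.TargetIndex = targetIndex;
                state.SourceIndex = sourceIndex;
                state.Length = length;
            }
        }
    }
}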

The following method compares the specified ranges in the source and target buffers to determine the length of a match.

private static int GetSourceMatchLength(PatchOperation operation,
                                        int target,
                                        int source,
                                        int maxLength)
{

Note

The source code for this method is available on the Companion Web site.

}
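
This step is simple enough to sketch in a few lines. Assuming the hypothetical PatchOperation buffers shown earlier, it just counts consecutive equal bytes.

private static int GetSourceMatchLength(PatchOperation operation,
                                        int target,
                                        int source,
                                        int maxLength)
{
    int length = 0;

    // Walk forward until the buffers diverge or the range limit is hit.
    while (length < maxLength &&
           operation.SourceData[source + length] ==
           operation.TargetData[target + length])
    {
        length++;
    }

    return length;
}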

The following method builds the PatchDifference list based on the results in the PatchOperation object.

private static List<PatchDifference> BuildDifferencesList(PatchOperation operation)
{

Note

The source code for this method is available on the Companion Web site.

}
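
As a hedged sketch of what this step could look like with the hypothetical types above: the matches are ordered, each matched run is recorded as a NoChange entry, and the gaps between matches are handed to RecordDifference for classification. The real engine may organize this differently.

private static List<PatchDifference> BuildDifferencesList(PatchOperation operation)
{
    List<PatchDifference> result = new List<PatchDifference>();

    // Walk the matches in target order so the gaps come out sequentially.
    operation.Matches.Sort(
        (a, b) => a.TargetIndex.CompareTo(b.TargetIndex));

    int targetPosition = 0;
    int sourcePosition = 0;

    foreach (PatchState match in operation.Matches)
    {
        // Whatever sits between the previous match and this one changed.
        RecordDifference(result,
                         targetPosition, match.TargetIndex,
                         sourcePosition, match.SourceIndex);

        // The matched run itself carries no data; record it as NoChange.
        result.Add(new PatchDifference
        {
            Type = PatchDifferenceType.NoChange,
            TargetStart = match.TargetIndex,
            TargetEnd = match.TargetIndex + match.Length,
            SourceStart = match.SourceIndex,
            SourceEnd = match.SourceIndex + match.Length
        });

        targetPosition = match.TargetIndex + match.Length;
        sourcePosition = match.SourceIndex + match.Length;
    }

    // Describe whatever trails the final match.
    RecordDifference(result,
                     targetPosition, operation.TargetData.Length,
                     sourcePosition, operation.SourceData.Length);

    return result;
}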

The following method accepts the results of the sequence compilation and builds differencing objects that will eventually be serialized into the XML transform document.

private static bool RecordDifference(List<PatchDifference> result,
                                     int targetStart,
                                     int targetEnd,
                                     int sourceStart,
                                     int sourceEnd)
{

Note

The source code for this method is available on the Companion Web site.

}
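
A plausible classification, again only a sketch: equal-sized gaps become an in-place substitution, while unequal gaps are broken into a deletion and/or an insertion. The real engine may classify these differently.

private static bool RecordDifference(List<PatchDifference> result,
                                     int targetStart,
                                     int targetEnd,
                                     int sourceStart,
                                     int sourceEnd)
{
    int targetLength = targetEnd - targetStart;
    int sourceLength = sourceEnd - sourceStart;

    // Nothing changed in this gap.
    if (targetLength <= 0 && sourceLength <= 0)
        return false;

    if (targetLength == sourceLength)
    {
        // Equal-sized ranges can be patched in place as a substitution.
        result.Add(new PatchDifference
        {
            Type = PatchDifferenceType.Substitute,
            TargetStart = targetStart, TargetEnd = targetEnd,
            SourceStart = sourceStart, SourceEnd = sourceEnd
        });
        return true;
    }

    if (sourceLength > 0)
    {
        // Bytes present in the source but not in the target were deleted.
        result.Add(new PatchDifference
        {
            Type = PatchDifferenceType.Delete,
            TargetStart = targetStart, TargetEnd = targetStart,
            SourceStart = sourceStart, SourceEnd = sourceEnd
        });
    }

    if (targetLength > 0)
    {
        // Bytes present in the target but not in the source were inserted.
        result.Add(new PatchDifference
        {
            Type = PatchDifferenceType.Insert,
            TargetStart = targetStart, TargetEnd = targetEnd,
            SourceStart = sourceStart, SourceEnd = sourceStart
        });
    }

    return true;
}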

Transforming Data Using a Difference List

Once a listing of all the differences between the two datasets is generated, there are a couple more steps we must take in order to have all the data required to perform a transformation.

We must first extract the modified bytes from the new dataset and store them with the difference list. In order to store the data, we must alter the PatchDifference object to include an array of bytes specific to the type of change that will be performed.

There will obviously be no data needed for the NoChange and Delete difference types.

A new function will be added to loop through the difference list and extract the modified bytes, storing them in each PatchDifference object. This function is described in the following code:

public static PatchDifferenceData[] BuildDifferenceData(PatchOperation operation)
{

Note

The source code for this method is available on the Companion Web site.

}
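
As with the other stubs, the following is only a sketch; the PatchDifferenceData type shown here is a hypothetical stand-in for the one defined in the companion source.

public class PatchDifferenceData
{
    public PatchDifferenceType Type;
    public int TargetStart, TargetEnd;
    public int SourceStart, SourceEnd;
    public byte[] Data;   // populated only for Insert and Substitute entries
}

public static PatchDifferenceData[] BuildDifferenceData(PatchOperation operation)
{
    List<PatchDifferenceData> result = new List<PatchDifferenceData>();

    foreach (PatchDifference difference in operation.Differences)
    {
        PatchDifferenceData data = new PatchDifferenceData
        {
            Type = difference.Type,
            TargetStart = difference.TargetStart,
            TargetEnd = difference.TargetEnd,
            SourceStart = difference.SourceStart,
            SourceEnd = difference.SourceEnd
        };

        // Only insertions and substitutions carry bytes from the new file.
        if (difference.Type == PatchDifferenceType.Insert ||
            difference.Type == PatchDifferenceType.Substitute)
        {
            int length = difference.TargetEnd - difference.TargetStart;
            data.Data = new byte[length];
            Array.Copy(operation.TargetData, difference.TargetStart,
                       data.Data, 0, length);
        }

        result.Add(data);
    }

    return result.ToArray();
}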

The preceding function loops through the difference list and, for insertion and substitution types, attaches the data associated with the change. We are left with an array of PatchDifferenceData objects containing the data related to each change. At this point, the array can easily be serialized into an XML document for an external system to use, or we can go one step further and build a complete patching engine. The example provided with this chapter shows how to do both.

In order to successfully transform the old file using the difference data, the changes must be applied in a particular order. Remember that the slightest error when transforming the old data will corrupt the file and generally make it unusable. The first change type to apply is substitution, because it does not require any bytes to be added to or removed from the source data, only a direct byte swap.

You want to loop through the difference list once for each difference type. First, loop and process all the substitution types. Remember that the PatchDifference object contains the target index in the resultant data, and the data to copy.

After substitution, the next difference type to process is deletion of data. Because the algorithm does not take other deletion changes into consideration, the data offsets are incorrect after the first deletion. To solve this, we must define a simple offset counter that is incremented by the number of bytes removed each time a deletion occurs. This offset is subtracted from the offset specified in the difference list so that the correct data is deleted and no buffer overflow occurs.

Lastly, the insertion type is processed to allocate memory for new data being copied into the resultant buffer. The first step is to allocate a block of memory at the correct index, with the size specified by the PatchDifference object. Once the memory exists, the insertion data can be safely copied over.

System.Collections.Generic.List<byte> is an excellent choice for storing the resultant data because it provides methods such as InsertRange and RemoveRange for inserting and deleting a range of elements at a specific index.

The functions in the following code illustrate the process described in the preceding text for transforming data using the difference list.

public byte[] MergeDifferences(PatchDifferenceData[] differenceData)
{

Note

The source code for this method is available on the Companion Web site.

}
private List<byte> MergeDifferencePass(PatchDifferenceData[] differenceData,
                                       List<byte> result,
                                       PatchDifferenceType type)
{

Note

The source code for this method is available on the Companion Web site.

}
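
The Companion Web site has the real implementation. The following hedged sketch shows one consistent way to apply the three passes described above, assuming the hypothetical PatchDifferenceData type from earlier and an instance field, sourceData, holding the old file’s bytes; in this sketch substitutions use the source offset because nothing has been resized yet, deletions subtract a running offset, and insertions are applied in ascending target order.

public byte[] MergeDifferences(PatchDifferenceData[] differenceData)
{
    // 'sourceData' is assumed to be a byte[] field containing the old file.
    List<byte> result = new List<byte>(sourceData);

    // Apply the three passes in the order described above.
    result = MergeDifferencePass(differenceData, result, PatchDifferenceType.Substitute);
    result = MergeDifferencePass(differenceData, result, PatchDifferenceType.Delete);
    result = MergeDifferencePass(differenceData, result, PatchDifferenceType.Insert);

    return result.ToArray();
}

private List<byte> MergeDifferencePass(PatchDifferenceData[] differenceData,
                                       List<byte> result,
                                       PatchDifferenceType type)
{
    // Every deletion shifts later offsets back by the number of bytes removed.
    int deletedOffset = 0;

    foreach (PatchDifferenceData difference in differenceData)
    {
        if (difference.Type != type)
            continue;

        switch (type)
        {
            case PatchDifferenceType.Substitute:
                // Equal-length swap; the source offsets are still valid
                // because nothing has been inserted or removed yet.
                for (int i = 0; i < difference.Data.Length; i++)
                    result[difference.SourceStart + i] = difference.Data[i];
                break;

            case PatchDifferenceType.Delete:
                int length = difference.SourceEnd - difference.SourceStart;
                result.RemoveRange(difference.SourceStart - deletedOffset, length);
                deletedOffset += length;
                break;

            case PatchDifferenceType.Insert:
                // By this pass the buffer matches the target data minus the
                // inserted ranges, so the target offset is the right index.
                result.InsertRange(difference.TargetStart, difference.Data);
                break;
        }
    }

    return result;
}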

After applying the preceding changes, the old data should have been correctly transformed into the new data. The Companion Web site includes an example XML document showing how a transformation sequence can look.

Thoughts for Usability and Deployment

A major weakness of the algorithm appears when the source and target files are barely similar and very few duplicate byte sequences can be found, which leads to a large number of recursive calls and comparisons. The algorithm works well on datasets with many similarities, because large sequential patterns can be skipped rather than processed, but it is very slow in the opposite situation.

One way to prevent this issue is to make sure that the file selected for patching really is an older version of the file the patch was built from. This is hard to detect when building a patch, but after the patch is created, you can add a fairly robust failsafe.

In Chapter 19, “Implementing a Checksum to Protect Data Integrity,” an algorithm was discussed that generated a number based on the byte values in a memory buffer. This value can be used to detect file corruption and tampering. The same technique can also be used to be certain the file we are trying to patch is the correct one.

A checksum can be generated based on the source file and then serialized into the difference list. When a file is selected to be patched, you can generate a checksum for it, and only patch the file if the generated checksum matches the value saved in the difference list.
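
Any checksum or hash will work here, and Chapter 19’s routine could be plugged in directly. Purely as an illustrative stand-in, the following sketch uses MD5 from System.Security.Cryptography (with System, System.IO, and System.Security.Cryptography imported) to compute and compare a file checksum before patching; the method names are hypothetical.

public static string ComputeChecksum(string fileName)
{
    using (FileStream stream = File.OpenRead(fileName))
    using (MD5 md5 = MD5.Create())
    {
        // Hash the entire file and format the result as a hex string.
        return BitConverter.ToString(md5.ComputeHash(stream));
    }
}

public static bool IsPatchable(string fileName, string expectedChecksum)
{
    // Only allow patching when the file on disk matches the checksum
    // that was serialized alongside the difference list.
    return string.Equals(ComputeChecksum(fileName), expectedChecksum,
                         StringComparison.OrdinalIgnoreCase);
}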

Employing this method of file verification solves a couple of issues. The obvious one is that it stops users from patching the wrong files, thus preventing the algorithm flaw discussed previously. Another benefit of using a checksum is that any files transmitted over the Internet can be verified for integrity.

Aside from source file verification, a reliable distribution system must also be in place so that users have access to the latest patch files for the software product. As previously discussed, an excellent transport medium for software updates is the Internet. A front end could query for the latest version, download the latest patch, and apply it to an older version of the product.

Compression could also be applied to the binary data, which would be a great enhancement to the patching system because the resultant data size would be decreased even further. .NET 2.0 introduced the System.IO.Compression namespace, which can accomplish this.
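
As a brief illustration, the following sketch compresses and decompresses a serialized patch buffer with GZipStream (with System.IO and System.IO.Compression imported); the method names are hypothetical.

public static byte[] CompressPatch(byte[] patchData)
{
    using (MemoryStream output = new MemoryStream())
    {
        using (GZipStream zip = new GZipStream(output, CompressionMode.Compress))
        {
            zip.Write(patchData, 0, patchData.Length);
        }

        // The GZipStream must be closed before the compressed bytes are read.
        return output.ToArray();
    }
}

public static byte[] DecompressPatch(byte[] compressedData)
{
    using (MemoryStream input = new MemoryStream(compressedData))
    using (GZipStream zip = new GZipStream(input, CompressionMode.Decompress))
    using (MemoryStream output = new MemoryStream())
    {
        byte[] buffer = new byte[4096];
        int bytesRead;

        while ((bytesRead = zip.Read(buffer, 0, buffer.Length)) > 0)
            output.Write(buffer, 0, bytesRead);

        return output.ToArray();
    }
}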

Lastly, another consideration for update generation is the modular structure of your software. For example, if your software has external modules or assemblies that are linked into the application at runtime, any updates specific to that library can be issued in a patch specific to just that file. If you have any modules of code that will change quite often, a general rule of thumb would be to place them in an external library and run updates against that file alone.

Conclusion

Aside from building your own patch generator, some third-party tools are readily available to do the job for you. InstallShield is one such tool; it will automatically build an update between two versions of data and offer a method of transitioning between them. Another excellent deployment tool introduced with .NET 2.0 is ClickOnce; be sure to check it out.

Why would you want to make your own? Having your own patch creation and deployment system offers a lot of flexibility, and allows you to build your own front-end and deployment process that users will access. There are also no licensing fees involved since no third-party software is required.

The accessibility and deployment of software updates is an important aspect of software development, especially with regard to publicly available tools. If a program flaw causes a tool to produce corrupt game content, the program is useless until it has been updated to produce correct data. By employing a system similar to the one discussed in this chapter, users will be able to update a tool without losing precious development time.
