CHAPTER 27
Closing the Holes: Mitigation

So, you have discovered a vulnerability in a piece of software. What now? The disclosure debate will always be around (see Chapter 3), but regardless of whether you disclose in public or to the vendor alone, some time will elapse between discovery of a vulnerability and release of a corresponding patch or update that corrects the problem. If you are using the software, what steps can you take to defend yourself in the meantime? If you are a consultant, what guidelines will you give your customers for defending themselves? This chapter presents some options for improving security during the vulnerability window that exists between discovery and correction of a vulnerability. We cover the following topics:

• Mitigation alternatives

• Patching

Mitigation Alternatives

More than enough resources are available that discuss the basics of network and application security. This chapter does not aim to enumerate all of the time-tested methods of securing computer systems. However, given the current state of the art in defensive techniques, we must emphasize that it remains difficult, if not impossible, to defend against a zero-day attack. When new vulnerabilities are discovered, we can only defend against them if we can prevent attackers from reaching the vulnerable application. All of the standard risk assessment questions should be revisited:

• Is this service really necessary? If not, turn it off.

• Should it be publicly accessible? If not, firewall it.

• Are all unsafe options turned off? If not, change the options.

And, of course, there are many others. For a properly secured computer or network, all of these questions should really already have been answered. From a risk management viewpoint, we balance the likelihood that an exploit for the newly discovered vulnerability will appear before a patch is available against the necessity of continuing to run the vulnerable service. It is always wisest to assume that someone will discover or learn of the same vulnerability we are investigating before the vulnerability is patched. With that assumption in mind, the real issue boils down to whether it is worth the risk to continue running the application, and if so, what defenses might be used. Port knocking and various forms of migration may be useful in these circumstances.

Port Knocking

Port knocking is a defensive technique that can be used with any network service but is most effective when a service is intended to be accessed by a limited number of users. An SSH or POP3 server could be easily sheltered with port knocking, while it would be difficult to protect a publicly accessible web server using the same technique. Port knocking is probably best described as a network cipher lock. The basic idea behind port knocking is that the port on which a network service listens remains closed until a user steps through a required knock sequence. A knock sequence is simply a list of ports that a user attempts to connect to before being granted permission to connect to the desired service. Ports involved in the knock sequence are generally closed, and a TCP/UDP-level filter detects the proper access sequence before opening the service port for an incoming connection from the knocking computer. Because generic client applications are generally not capable of performing a knock sequence, authorized users must be supplied with custom client software or properly configured knocking software. This is the reason that port knocking is not an appropriate protection mechanism for publicly accessible services.

One thing to keep in mind regarding port knocking is that it doesn’t fix vulnerabilities within protected services in any way; it simply makes it more difficult to reach them. An attacker who is in a position to observe traffic to a protected server or who can observe traffic originating from an authorized client can obtain the knock sequence and utilize it to gain access to the protected service. Finally, a malicious insider who knows the knock sequence will always be able to reach the vulnerable service.

References

Port Knocking www.portknocking.org

“Port Knocking: Network Authentication Across Closed Ports” (M. Krzywinski), SysAdmin Magazine, 12: 12–17 (2003)

Migration

Not always the most practical solution to security problems, but sometimes the most sensible, migration is well worth considering as a means of improving overall security. Migration paths to consider include moving services to a completely new operating system or completely replacing a vulnerable application with one that is more secure.

Migrating to a New Operating System

Migrating an existing application to a new operating system is usually only possible when a version of the application exists for the new operating system. In selecting a new operating system, we should consider those that contain features that make exploitation of common classes of vulnerabilities difficult or impossible. Many products exist that either include built-in protection methods or provide bolt-on solutions. Some of the more notable are

• ExecShield

• grsecurity

• Microsoft Windows 7 or Windows Server 2008

• OpenBSD

• Openwall Project

Any number of arguments, bordering on religious in their intensity, can be found regarding the effectiveness of each of these products. Suffice it to say that any protection is better than none, especially if you are migrating as the result of a known vulnerability. It is important that you choose an operating system and protection mechanism that will offer some protection against the types of exploits that could be developed for that vulnerability.

Migrating to a New Application

Choosing to migrate to an entirely new application is perhaps the most difficult route to take, for any number of reasons. Lack of alternatives for a given operating system, data migration, and impact on users are a few of the bigger challenges to be faced. In some cases, choosing to migrate to a new application may also require a change in host operating systems. Of course, the new application must provide sufficient functionality to replace the existing vulnerable application, but additional factors to consider before migrating include the security track record of the new application and the responsiveness of its vendor to security problems. For some organizations, the ability to audit and patch application source code may be desirable. Other organizations may be locked into a particular operating system or application because of mandatory corporate policies. The bottom line is that migrating in response to a newly discovered vulnerability should be done because a risk analysis determines that it is the best course of action. In this instance, security is the primary factor to be looked at, not a bunch of bells and whistles that happen to be tacked onto the new application.

References

ExecShield people.redhat.com/mingo/exec-shield/

grsecurity www.grsecurity.net

Microsoft Windows 7 and Windows Server 2008 www.microsoft.com

OpenBSD www.openbsd.org

Openwall Project www.openwall.com/Owl/

Patching

The only sure way to secure a vulnerable application is to shut it down or patch it. If the vendor can be trusted to release patches in an expeditious manner, we may be fortunate enough to avoid long periods of exposure for the vulnerable application. Unfortunately, in some cases vendors take weeks, months, or more to properly patch reported vulnerabilities, or worse yet, release patches that fail to correct known vulnerabilities, thereby necessitating additional patches. If we determine that we must keep the application up and running, it may be in our best interests to attempt to patch the application ourselves. Clearly, this will be an easier task if we have source code to work with, and this is one of the leading arguments in favor of the use of open source software. Patching application binaries is possible, but difficult at best. Without access to source code, you may feel it is easiest to leave it to the application vendor to supply a patch. Unfortunately, that wait leaves you exposed from the discovery of the vulnerability until the release of its corresponding patch. For this reason, it is at least useful to understand some of the issues involved with patching binary images.

Source Code Patching Considerations

As mentioned earlier, patching source code is infinitely easier than patching at the binary level. When source code is available, users are afforded the opportunity to play a greater role in developing and securing their applications. The important thing to remember is that easy patching is not necessarily quality patching. Developer involvement is essential regardless of whether we can point to a specific line of source code that results in a vulnerability or the vulnerability is discovered in a closed source binary.

When to Patch

The temptation to simply patch our application’s source code and press on may be a great one. If the application is no longer actively supported and we are determined to continue using it, our only recourse will be to patch it up and move on. For actively supported software, it is still useful to develop a patch to demonstrate that the vulnerability can be closed. In any case, it is crucial that the patch that is developed fixes not only any obvious causes of the vulnerability, but also any underlying causes, and does so without introducing any new problems. In practice, this requires more than superficial acquaintance with the source code and remains the primary reason the majority of users of open source software do not contribute to its development. It takes a significant amount of time to become familiar with the architecture of any software system, especially one in which you have not been involved from the start.

What to Patch

Clearly, we are interested in patching the root cause of the vulnerability without introducing any additional vulnerabilities. Securing software involves more than just replacing insecure functions with their more secure counterparts. For example, the common replacement for strcpy(), strncpy(), has its own problems that far too few people are aware of.



NOTE

The strncpy() function takes as parameters source and destination buffers and a maximum number, n, of characters to copy. It does not guarantee null termination of its destination buffer. In cases where the source buffer contains n or more characters, no null-termination character will be copied into the destination buffer.


In many cases, perhaps the majority of cases, no one function is the direct cause of a vulnerability. Improper buffer handling and poor parsing algorithms cause their fair share of problems, as does the failure to understand the differences between signed and unsigned data. In developing a proper patch, it is always wise to investigate all of the underlying assumptions that the original programmer made regarding data handling and verify that each assumption is properly accounted for in the program’s implementation. This is the reason that it is always desirable to work in a cooperative manner with the program developers. Few people are better suited to understand the code than the original authors.

Patch Development and Use

When working with source code, the two most common programs used for creating and applying patches are the command-line tools diff and patch. Patches are created using the diff program, which compares one file to another and generates a list of differences between the two.

diff diff reports changes by listing all lines that have been removed or replaced between old and new versions of a file. With appropriate options, diff can recursively descend into subdirectories and compare files with the same names in the old and new directory trees. diff output is sent to standard out and is usually redirected in order to create a patch file. The three most common options to diff are

-a Causes diff to treat all files as text

-u Causes diff to generate output in “unified” format

-r Instructs diff to recursively descend into subdirectories

As an example, take a vulnerable program whose source tree is rooted in a directory named hackable. If we created a secure version of this program in a directory named hackable_not, we could create a patch with the following diff command:


   diff -aur hackable/ hackable_not/ > hackable.patch

The following output shows the differences between two files, example.c and example_fixed.c, as generated by this command:


   # diff -au example.c example_fixed.c
   --- example.c 2004-07-27 03:36:21.000000000 -0700
   +++ example_fixed.c 2004-07-27 03:37:12.000000000 -0700
   @@ -6,7 +6,8 @@
   int main(int argc, char **argv) {
      char buf[80];
   - strcpy(buf, argv[0]);
   + strncpy(buf, argv[0], sizeof(buf));
   + buf[sizeof(buf) - 1] = 0;
     printf("This program is named %s\n", buf);
    }

The unified output format is used and indicates the files that have been compared, the locations at which they differ, and the ways in which they differ. The important parts are the lines prefixed with + and -. A + prefix indicates that the associated line exists in the new file but not in the original. A - prefix indicates that a line exists in the original file but not in the new file. Lines with no prefix serve to show surrounding context information so that patch can more precisely locate the lines to be changed.

patch patch is a tool that is capable of understanding the output of diff and using it to transform a file according to the differences reported by diff. Patch files are most often published by software developers as a way to quickly disseminate just the information that has changed between software revisions. This saves time because downloading a patch file is typically much faster than downloading the entire source code for an application. By applying a patch file to original source code, users transform their original source into the revised source developed by the program maintainers. If we had the original version of example.c used previously, given the output of diff shown earlier and placed in a file named example.patch, we could use patch as


   patch example.c < example.patch

to transform the contents of example.c into those of example_fixed.c without ever seeing the complete file example_fixed.c.

Binary Patching Considerations

In situations where it is impossible to access the original source code for a program, we may be forced to consider patching the actual program binary. Patching binaries requires detailed knowledge of executable file formats and demands a great amount of care to ensure that no new problems are introduced.

Why Patch?

The simplest argument for using binary patching can be made when a vulnerability is found in software that is no longer vendor supported. Such cases arise when vendors go out of business or when a product remains in use long after a vendor has ceased to support it. Before electing to patch binaries, migration or upgrade should be strongly considered in such cases; both are likely to be easier in the long run.

For supported software, it remains a simple fact that some software vendors are unresponsive when presented with evidence of a vulnerability in one of their products. Standard reasons for slow vendor response include “we can’t replicate the problem” and “we need to ensure that the patch is stable.” In poorly architected systems, problems can run so deep that massive reengineering, requiring a significant amount of time, is required before a fix can be produced. Regardless of the reason, users may be left exposed for extended periods—and unfortunately, when dealing with things like Internet worms, a single day represents a huge amount of time.

Understanding Executable Formats

In addition to machine language, modern executable files contain a large amount of bookkeeping information. Among other things, this information indicates what dynamic libraries and functions a program requires access to, where the program should reside in memory, and, in some cases, detailed debugging information that relates the compiled machine language back to its original source. Properly locating the machine language portions of a file requires detailed knowledge of the format of the file. Two common file formats in use today are the Executable and Linking Format (ELF) used on many Unix-type systems, including Linux, and the Portable Executable (PE) format used on modern Windows systems. The structure of an ELF executable binary is shown in Figure 27-1.

The ELF header portion of the file specifies the location of the first instruction to be executed and indicates the locations and sizes of the program and section header tables. The program header table is a required element in an executable image and contains one entry for each program segment. Program segments are made up of one or more program sections. Each segment header entry specifies the location of the segment within the file, the virtual memory address at which to load the segment at runtime, the size of the segment within the file, and the size of the segment when loaded into memory. It is important to note that a segment may occupy no space within a file and yet occupy some space in memory at runtime. This is common when uninitialized data is present within a program.

The section header table contains information describing each program section. This information is used at link time to assist in creating an executable image from compiled object files. Following linking, this information is no longer required; thus, the section header table is an optional element (though it is generally present) in executable files. Common sections included in most executables are

• The .bss section describes the size and location of uninitialized program data. This section occupies no space in the file but does occupy space when an executable file is loaded into memory.

• The .data section contains initialized program data that is loaded into memory at runtime.

• The .text section contains the program’s executable instructions.

Many other sections are commonly found in ELF executables. Refer to the ELF specification for more detailed information.

Microsoft Windows PE files also have a well-defined structure, as defined by Microsoft’s Portable Executable and Common Object File Format Specification. While the physical structure of a PE file differs significantly from that of an ELF file, from a logical perspective, many similar elements exist in both. Like ELF files, PE files must detail the layout of the file, including the location of code and data, virtual address information, and dynamic linking requirements. By gaining an understanding of either one of these file formats, you will be well prepared to understand the format of additional types of executable files.


Figure 27-1 Structure of an ELF executable file

Patch Development and Application

Patching an executable file is a nontrivial process. While the changes you wish to make to a binary may be very clear to you, the capability to make those changes may simply not exist. Any changes made to a compiled binary must ensure not only that the operation of the vulnerable program is corrected, but also that the structure of the binary file image is not corrupted. Key things to think about when considering binary patching include

• Does the patch cause the length of a function (in bytes) to change?

• Does the patch require functions not previously called by the program?

Any change that affects the size of the program will be difficult to accommodate and requires very careful thought. Ideally, holes (or, as Halvar Flake, CEO of Zynamics.com, terms them, “caves”) in which to place new instructions can be found in a binary’s virtual address space. Holes can exist where program sections are not contiguous in memory, or where a compiler or linker elects to pad section sizes up to specific boundaries. In other cases, you may be able to take advantage of holes that arise because of alignment requirements. For example, if a particular compiler insists on aligning functions on 8-byte boundaries, then each function may be followed by as many as 7 bytes of padding. This padding, where available, can be used to embed additional instructions or as room to grow existing functions.

With a thorough understanding of an executable file’s headers, it is sometimes possible to take advantage of the difference between an executable’s file layout and its eventual memory layout. To reduce an executable’s disk footprint, padding bytes that may be present at runtime are often not stored in the disk image of the executable. Using appropriate editors (PE Explorer is an example of one such editor for Windows PE files), it is often possible to grow a file’s disk image without impacting the file’s runtime memory layout. In these cases, it is possible to inject code into the expanded regions within the file’s various sections.

Regardless of how you find a hole, using the hole generally involves replacing vulnerable code with a jump to your hole, placing patched code within the hole, and finally jumping back to the location following the original vulnerable code. This process is shown in Figure 27-2.


Figure 27-2 Patching into a file hole

Once space is available within a binary, the act of inserting new code is often performed using a hex editor. The raw byte values of the machine language, often obtained using an assembler program such as Netwide Assembler (NASM), are pasted into the appropriate regions in the file, and the resulting file is saved to yield a patched executable. It is important to remember that disassemblers such as IDA Pro are not generally capable of performing a patch operation themselves. In the case of IDA Pro, while it will certainly help you develop and visualize the patch you intend to make, all changes that you observe in IDA Pro are simply changes to the IDA database and do not change the original binary file in any way. Not only that, but there is no way to export the changes that you may have made within IDA Pro back out to the original binary file. This is why assembly and hex editing skills are essential for anyone who expects to do any binary patching.

Once a patched binary has been successfully created and tested, the problem of distributing the binary remains. Any number of reasons exist that may preclude distribution of the entire patched binary, ranging from prohibitive size to legal restrictions. One tool for generating and applying binary patches is named Xdelta. Xdelta combines the functionality of diff and patch into a single tool capable of being used on binary files. Xdelta can generate the difference between any two files regardless of the type of those files. When Xdelta is used, only the binary difference file (the “delta”) needs to be distributed. Recipients utilize Xdelta to update their binaries by applying the delta file to their affected binary.

Limitations

File formats for executable files are very rigid in their structure. One of the toughest problems to overcome when patching a binary is finding space to insert new code. Unlike simple text files, you cannot simply turn on insert mode and paste in a sequence of assembly language. Extreme care must be taken if any code in a binary is to be relocated. Moving any instruction may require updates to relative jump offsets or require computation of new absolute address values.



NOTE

Two common means of referring to addresses in assembly language are relative offsets and absolute addresses. An absolute address is an unambiguous location assigned to an instruction or to data. In absolute terms, you might refer to the instruction at location 12345. A relative offset describes a location as the distance from some reference location (often the current instruction) to the desired location. In relative terms, you might refer to the instruction that precedes the current instruction by 45 bytes.


A second problem arises when it becomes necessary to replace one function call with another. This may not always be easily achievable, depending on the binary being patched. Take, for example, a program that contains an exploitable call to the strcpy() function. If the ideal solution is to change the program to call strncpy(), then there are several things to consider. The first challenge is to find a hole in the binary so that an additional parameter (the length parameter of strncpy()) can be pushed on the stack. Next, a way to call strncpy() needs to be found. If the program actually calls strncpy() at some other point, the address of the strncpy() function can be substituted for the address of the vulnerable strcpy() function. If the program contains no other calls to strncpy(), then things get complicated. For statically linked programs, the entire strncpy() function would need to be inserted into the binary, requiring significant changes to the file that may not be possible to accomplish. For dynamically linked binaries, the program’s import table would need to be edited so that the loader performs the proper symbol resolution to link in the strncpy() function in the future. Manipulating a program’s import table is another task that requires extremely detailed knowledge of the executable file’s format, making this a difficult task at best.

Binary Mutation

As discussed, it may be a difficult task to develop a binary patch that completely fixes an exploitable condition without access to source code or significant vendor support. One technique for restricting access to vulnerable applications while awaiting a vendor-supplied patch is port knocking, discussed earlier in the chapter. A drawback to port knocking is that a malicious user who knows the knock sequence can still exploit the vulnerable application. In this section, we discuss an alternative patching strategy for situations in which you are required to continue running a vulnerable application. The essence of this technique is to generate a patch for the application that changes its characteristics just enough that the application is no longer vulnerable to the same “mass market” exploit that is developed to attack every unpatched version of the application. In other words, the goal is to mutate or create genetic diversity in the application such that it becomes resistant to standard strains of malware that seek to infect it. It is important to note that the patching technique introduced here makes no effort to actually correct the vulnerable condition; it simply aims to modify a vulnerable application sufficiently to make standard attacks fail against it.

Mutations Against Stack Overflows

In Chapter 11, you learned about the causes of stack overflows and how to exploit them. In this section, we discuss simple changes to a binary that can cause an attacker’s working exploit to fail. Recall that the space for stack-allocated local variables is allocated during a function prolog by adjusting the stack pointer upon entry to that function. The following shows the C source code for a function badCode(), along with the x86 prolog code that might be generated for badCode():


   void badCode(int x) {
      char buf[256];
      int i, j;
      //body of badCode here
   }
   ; generated assembly prologue for badCode
   badCode:
      push ebp
      mov  ebp, esp
      sub  esp, 264

Here, the statement that subtracts 264 from esp allocates stack space for the 256-byte buffer and the two 4-byte integers i and j. All references to the variable at [ebp-256] refer to the 256-byte buffer buf. If an attacker discovers a vulnerability leading to the overflow of the 256-byte buffer, she can develop an exploit that copies at least 264 bytes into buf (256 bytes to fill buf, 4 bytes to overwrite the saved ebp value, and an additional 4 bytes to control the saved return address) and gain control of the vulnerable application. Figure 27-3 shows the stack frame associated with the badCode() function.

Mutating this application is a simple matter of modifying the stack layout in such a way that the location of the saved return address with respect to the start of the buffer is something other than the attacker expects. In this case, we would like to move buf in some way so that it is more than 260 bytes away from the saved return address. This is a simple two-step process. The first step is to make badCode() request more stack space, which is accomplished by modifying the constant that is subtracted from esp in the prolog. For this example, we choose to relocate buf to the opposite side of variables i and j. To do this, we need enough additional space to hold buf and leave i and j in their original locations. The modified prolog is shown in the following listing:


   ; mutated assembly prologue for badCode
   badCode:
      push ebp
      mov  ebp, esp
      sub  esp, 520

The resulting mutated stack frame can be seen in Figure 27-4, where we note that the mutated offset to buf is [ebp-520].

The final change required to complete the mutation is to locate all references to [ebp-256] in the original version of badCode() and update the offset from ebp to reflect the new location of buf at [ebp-520]. The total number of bytes that must be changed to effect this mutation is small: one byte for the change to the prolog (the immediate 0x108 becomes 0x208), plus the changed displacement bytes at each location that references buf. As a result of this particular mutation, the attacker’s 264-byte overwrite falls far short of the return address she is attempting to overwrite. Without knowing the layout of our mutated binary, the attacker can only guess why her attack has failed; hopefully, she will assume that our particular application is patched, leading her to move on to other, unpatched victims.


Figure 27-3 Original stack layout


Figure 27-4 Mutated stack layout

Note that the application remains as vulnerable as ever. A buffer of 528 bytes will still overwrite the saved return address. A clever attacker might attempt to grow her buffer by incrementally appending copies of her desired return address to the tail end of her buffer, eventually stumbling across a proper buffer size to exploit our application. However, as a final twist, it is worth noting that we have introduced several new obstacles that the attacker must overcome. First, the location of buf has changed enough that any return address chosen by the attacker may fail to properly land in the new location of buf, thereby causing her to miss her shellcode. Second, the variables i and j now lie beneath buf and will both be corrupted by the attacker’s overflow. If the attacker’s input causes invalid values to be placed into either of these variables, we may see unexpected behavior in badCode(), which may cause the function to terminate in a manner not anticipated by our attacker. In this case, i and j behave as makeshift stack canaries. Without access to our mutated binary, the attacker will not understand that she must take special care to maintain the integrity of both i and j. Finally, we could have allocated more stack space in the prolog by subtracting 536 bytes, for example, and relocating buf to [ebp-527]. The effect of this subtle change is to make buf begin on something other than a 4-byte boundary. Without knowing the alignment of buf, any return address contained in the attacker’s input is not likely to be properly aligned when it overwrites the saved return address, which again will lead to failure of the attacker’s exploit.

The preceding example presents merely one way in which a stack layout may be modified in an attempt to thwart any automated exploits that may appear for our vulnerable application. You must remember that this technique merely provides security through obscurity and should never be relied upon as a permanent fix to a vulnerability. The only goal of a patch of this sort should be to allow an application to run during the time frame between disclosure of a vulnerability and the release of a proper patch by the application vendor.

Mutations Against Heap Overflows

Like stack overflows, successful heap overflows require the attacker to have an accurate picture of the memory layout surrounding the vulnerable buffer. In the case of a heap overflow, the attacker’s goal is to overwrite heap control structures with specially chosen values that will cause the heap management routines to write a value of the attacker’s choosing into a location of the attacker’s choosing. With this simple arbitrary write capability, an attacker can take control of the vulnerable process. To design a mutation that prevents a specific overflow attack, we need to cause the layout of the heap to change to something other than what the attacker will expect based on his analysis of the vulnerable binary. Since the entire point of the mutations we are discussing is to generate a simple patch that does not require major revisions of the binary, we need to come up with a simple technique for mutating the heap without requiring the insertion of new code into our binary. Recall that we performed a stack buffer mutation by modifying the function prolog to change the size of the allocated local variables. For heap overflows, the analogous mutation would be to modify the size of the memory block passed to malloc/new when we allocate the block of memory that the attacker expects to overflow. The basic idea is to increase the amount of memory being requested, which in turn will cause the attacker’s buffer layout to fall short of the control structures he is targeting. The following listing shows the allocation of a 256-byte heap buffer:


   ; allocate a 256-byte buffer in the heap
      push 256
      call malloc

Following allocation of this buffer, the attacker expects that heap control structures lie anywhere from 256 to 272 bytes into the buffer. If we modify the preceding code to the following,


   ; allocate a 280-byte buffer in lieu of a 256-byte buffer
      push 280
      call malloc

then the attacker’s assumptions about the location of the heap control structures become invalid and his exploit becomes far more likely to fail. Heap mutations become somewhat more complicated when the size of the allocated buffer must be computed at runtime. In those cases, we must find a way to modify the computation so that it yields a slightly larger size.

Mutations Against Format String Exploits

Like stack overflows, format string exploits require the attacker to have specific knowledge of the layout of the stack. This is because the attacker needs pointer values to fall in very specific stack locations in order to achieve the arbitrary write capability that format string exploits offer. As an example, an attacker may rely on indexed parameter values such as “%17$hn” (refer to Chapter 12 for format string details) in her format string. Mutations to mitigate format string vulnerabilities rely on the same layout modification assumptions we have used for mitigating stack and heap overflows. If we can modify the stack in a way that invalidates the attacker’s assumptions about the location of her data, then her exploit is likely to fail. Consider the function bar() and a portion of the assembly language generated for it in the following listing:


   void bar() {
      char local_buf[1024];
      //now fill local_buf with user input
      ...
      printf(local_buf);
   }
   ; assembly excerpt for function bar
   bar:
      push ebp
      mov ebp, esp
      sub esp, 1024 ; allocates local_buf
      ;do something to fill local_buf with user input
      ...
      lea eax, [ebp-1024]
      push eax
      call printf

Clearly, this contains a format string vulnerability, since local_buf, which contains user-supplied input data, will be used directly as the format string in a call to printf(). The stack layout for both bar() and printf() is shown in Figure 27-5.

Figure 27-5 shows that the attacker can expect to reference elements of local_buf as parameters 1$ through 256$ when constructing her format string. If we make the simple change shown in the following listing, allocating an additional 1024 bytes in bar’s stack frame, the attacker’s assumptions will no longer hold and her format string exploit will, in all likelihood, fail:


   ; Modified assembly excerpt for function bar
   bar:
      push ebp
      mov ebp, esp
      sub esp, 2048 ; allocates local_buf and padding
      ;do something to fill local_buf with user input
      ...
      lea eax, [ebp-1024]
      push eax
      call printf

The reason this simple change will cause the attack to fail can be seen upon examination of the new stack layout, shown in Figure 27-6.

Note how the extra stack space allocated in bar’s prolog causes the location of local_buf to shift from the perspective of printf(). Values that the attacker expects to find in locations 1$ to 256$ are now in locations 257$ through 512$. As a result, any assumptions the attacker makes about the location of her format string become invalid and the attack fails.

As with the other mutation techniques, it is essential to remember that this type of patch does not correct the underlying vulnerability. In the preceding example, function bar() continues to contain a format string vulnerability that can be exploited if the attacker has proper knowledge of the stack layout of bar(). What has been gained, however, is some measure of resistance to any automated attacks that might be created to exploit the unpatched version of this vulnerability. It cannot be stressed enough that this should never be considered a long-term solution to an exploitable condition and that a proper, vendor-supplied patch should be applied at the earliest possible opportunity.


Figure 27-5 printf() stack layout 1


Figure 27-6 printf() stack layout 2

Third-Party Patching Initiatives

Every time a vulnerability is publicly disclosed, the vendor of the affected software is heavily scrutinized. If the vulnerability is announced in conjunction with the release of a patch, the public wants to know how long the vendor knew about the vulnerability before the patch was released. This is an important piece of information, as it lets users know how long the vendor left them vulnerable to potential zero-day attacks. When vulnerabilities are disclosed prior to vendor notification, users of the affected software demand a rapid response from the vendor so that they can get their software patched and protected against attacks associated with the newly announced vulnerability. As a result, vendor response time has become one of the factors some organizations weigh when deciding which applications best suit their needs. In some cases, vendors have elected to regulate the frequency with which they release security updates. Microsoft, for example, is well known for its “Patch Tuesday” process of releasing security updates on the second Tuesday of each month. Unfortunately, astute attackers may choose to announce vulnerabilities on the following day in an attempt to assure themselves of at least a one-month response time.

In response to perceived sluggishness on the part of software vendors where patching vulnerabilities is concerned, several third-party security patches have been made available following the disclosure of vulnerabilities. This trend seems to have started with Ilfak Guilfanov, the author of IDA Pro, who released a patch for the Windows WMF exploit in late December 2005. It is not surprising that Microsoft recommended against using this third-party patch. What was surprising was the endorsement of the patch by the SANS Internet Storm Center. With such contradictory information, what is the average computer user going to do? This is a difficult question that must be resolved if the idea of third-party patching is ever to become widely accepted. Nonetheless, in the wake of the WMF exploit, additional third-party patches have been released for more recent vulnerabilities. Several years ago, we also saw the formation of a group of security professionals into the self-proclaimed Zeroday Emergency Response Team (ZERT), whose goal is the rapid development of patches in the wake of public vulnerability disclosures. Finally, in response to one of the bug-a-day efforts dubbed the “Month of Apple Bugs,” former Apple developer Landon Fuller ran his own parallel effort, the “Month of Apple Fixes.” The net result for end users, sidestepping the question of how a third party can develop a patch faster than an application vendor, is that, in some instances, patches for known vulnerabilities may be available long before application vendors release official patches. However, exercise extreme caution when using these patches because you can’t expect vendor support should such a patch have any harmful side effects.

References

diff www.gnu.org/software/diffutils/diffutils.html

“Microsoft Portable Executable and Common Object File Format Specification” www.microsoft.com/whdc/system/platform/firmware/PECOFF.mspx

Month of Apple Bugs (Lance M. Havok and Kevin Finisterre) projects.info-pull.com/moab/

Month of Apple Fixes (Landon Fuller) landonf.bikemonkey.org/code/macosx/

patch savannah.gnu.org/projects/patch

“Tool Interface Standard (TIS) Executable and Linking Format (ELF) Specification, Version 1.2” (TIS Committee) refspecs.freestandards.org/elf/elf.pdf

“Windows WMF Metafile Vulnerability HotFix” (Ilfak Guilfanov) hexblog.com/?p=21

Xdelta code.google.com/p/xdelta/

Zeroday Emergency Response Team (ZERT) www.isotf.org/zert/
