Chapter 16. Advanced Debugging

Successful debugging is about asking the correct questions. Many (but not all) of those questions can be answered with the Microsoft Visual Studio Debugger. Some questions require the use of advanced debugging tools to find the answer. If an application deadlocks, what are the outstanding synchronization objects? When there is a memory leak, how much native memory has been allocated, compared to managed memory? Is a memory leak associated with a particular generation or with the large object heap? These and other questions cannot be answered by the Visual Studio Debugger, but they might be essential to resolving a problem quickly.

Effective debugging often is about having the correct tools. The .NET Framework includes a variety of debugging tools, such as the Son of Strike (SOS) debugging extension (SOS.dll), DbgCLR, and CorDbg. Installing Visual Studio provides Spy++, Dependency Walker, OLE/COM Object Viewer, and many other basic tools. In addition, a host of debugging tools can be downloaded from the Debugging Tools for Windows Web site (http://www.microsoft.com/whdc/devtools/debugging). WinDbg, ADPlus, and GFlags are probably the most commonly used debugging tools available from this site. The tools at this site are updated periodically, and new versions should be downloaded on occasion. Finally, the Reliability and Performance Monitor and Windows Task Manager are distributed with the Microsoft Windows environment.

These tools are not intended to replace the Visual Studio Debugger. The first rule of debugging is to employ lightweight debugging before resorting to the heavy arsenal. The Visual Studio Debugger is ideal for initial debugging. As part of the Visual Studio Integrated Development Environment (IDE), the Visual Studio debugger is more convenient than WinDbg, offers a familiar user interface, and has superior documentation. Lightweight debugging consists of checking for uninitialized variables or parameters, errant loop counters, logic errors, and other basic problems. These basic problems, rather than more dramatic circumstances, are the most frequent cause of simple bugs.

The goal of debugging is to resolve abnormal error conditions. Program hangs, crashes, memory leaks, and unhandled exceptions are possible error conditions. Some abnormal conditions, primarily logic errors, do not generate exceptional events. For example, a program that reports incorrect results has a bug. It might not be as intrusive as an exception, but it is a bug nonetheless.

Debugging is conducted in three phases: discovery, analysis, and testing. The discovery phase is when data on the problem is gathered. During this stage, you can capture the state of the application in a dump or perform live debugging. The analysis phase is when the abnormal condition is diagnosed using the results of the discovery phase. The testing phase validates the analysis phase and later validates the solution. Debugging is an iterative process. Based on the results of the testing phase, further discovery, analysis, and testing could be required.

Debugging can be invasive or noninvasive. For applications that should not be interrupted, such as production applications, noninvasive debugging is preferred. The advantage of noninvasive debugging is that the debuggee is not affected by the debugging process. Invasive debugging provides additional data and flexibility, but it should be limited to the development environment.

You can debug running applications (live debugging) or perform postmortem analysis using a dump. With live debugging, breakpoints are essential. When a breakpoint is hit, you then can step through the application to verify program logic, monitor local variables, inspect the heap, watch the call stack, and perform other tasks. The opposite of live debugging is postmortem analysis. Postmortem analysis has some advantages over live debugging. You can create a dump and then debug at your convenience. Because a dump is a static snapshot, it preserves the history of a problem. But it also has some disadvantages. It is harder to pose future-tense questions with postmortem analysis. For example, what is the call stack after a future operation is performed? What is the effect of future iterations of a for loop? What is the impact of changing the value of a local variable? How is native memory trending versus managed memory? Finding these kinds of answers from a single dump is difficult.

Debugging a production application is different from debugging the debug build of an application. First, the constraints are not the same. The priority for debugging a production application is often to minimize downtime. For example, with a high-traffic retail Web site, the primary concern of the company might be lost revenue. Second, the production machine might lack debugging resources, such as symbols, source code, and debugging tools. This could make it difficult to debug the production application locally. Third, re-creating the abnormal condition could be problematic. Load factors, memory stress, and other conditions common to the production server could be hard to replicate in a developer environment. This could make it difficult to reproduce the problem consistently. Fourth, accessibility might be an issue. The production application might be offsite, in a locked server closet, or in some otherwise inconvenient location. This might necessitate remote debugging, which could entail setup and possible trust issues between machines. Finally, production applications are typically release builds with optimizations, which can make debugging less transparent. Conversely, debug builds normally are not optimized, but they are easier to debug.

This chapter presents different versions of the Store application. Each version demonstrates a different aspect of debugging. The Store application is included with the companion content for the book posted on the Web.

DebuggableAttribute Attribute

Just-in-time (JIT) optimizations are controlled by the DebuggableAttribute attribute. The DebuggableAttribute type contains the IsJITOptimizerDisabled and IsJITTrackingEnabled properties, which control JIT behavior. If IsJITOptimizerDisabled is true, the JIT compiler does not optimize the code, even in a release build. If IsJITTrackingEnabled is true, the Common Language Runtime (CLR) tracks information that is helpful for debugging.
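To see which of these settings apply to a loaded assembly, you can read the attribute through reflection. The following sketch (a hypothetical console program, not part of the Store application) prints the two properties for the executing assembly:

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

class DebuggableInspector
{
    static void Main()
    {
        // DebuggableAttribute is applied at the assembly level by the compiler.
        Assembly asm = Assembly.GetExecutingAssembly();
        DebuggableAttribute attr = (DebuggableAttribute)Attribute.GetCustomAttribute(
            asm, typeof(DebuggableAttribute));

        if (attr == null)
        {
            // No attribute means the JIT compiler is free to fully optimize.
            Console.WriteLine("No DebuggableAttribute: code is optimized.");
        }
        else
        {
            Console.WriteLine("IsJITOptimizerDisabled: {0}", attr.IsJITOptimizerDisabled);
            Console.WriteLine("IsJITTrackingEnabled: {0}", attr.IsJITTrackingEnabled);
        }
    }
}
```

A debug build typically reports true for both properties, whereas a release build reports false for IsJITOptimizerDisabled; the exact values depend on compiler switches such as /debug and /optimize.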

Configuring JIT optimizations, as discussed next, can aid in debugging a production (release version) application that is optimized for execution. Create an initialization file for the release application that sets optimization for debugging. The initialization file must be in the same directory as the application and named application.ini, where application is the name of the executable file (for example, Store.ini for Store.exe). The initialization file has two entries in the [.NET Framework Debugging Control] section. The GenerateTrackingInfo entry enables or disables tracking information. The AllowOptimize entry controls code optimization. In the initialization file, 1 is true and 0 is false. The following initialization file enables tracking information and disables code optimization, which is the configuration most helpful for debugging:

[.NET Framework Debugging Control]
GenerateTrackingInfo=1
AllowOptimize=0

Problems that occur in a release version of a product can disappear in a debug build, and vice versa. Optimizations can alter the resulting application subtly, and these differences can cause the debug and the release versions to behave differently. An abnormal condition in a production application might disappear mysteriously in a debug version. This is a frustrating but not uncommon circumstance. For this reason, test both the debug and the release versions of the product extensively.
