Chapter 8. Performance Tuning, Configuration, and Debugging

 

"It's easy to cry 'bug' when the truth is that you've got a complex system and sometimes it takes a while to get all the components to co-exist peacefully."

 
 --Doug Vargas

Even though .NET 4 makes developing complex systems a simpler task, the fact remains that a CMS can be a very resource-hungry entity. The worst place to discover a performance problem is in a production environment when the web servers stop responding. In this chapter, we'll look at some ways to ensure that users are getting the best possible experience from the system, explore how to establish baseline performance metrics to see the effects of new code over time, and look at the new debugging features of Visual Studio 2010 that can make tracking down problems easier and faster.

The CMS Definition of Performance

Performance tuning is a deceptively deep and nuanced set of tasks. With regard to the CMS, we are concerned with two key factors: latency and throughput. These factors define the performance landscape: they shape how users perceive the system and determine the infrastructure requirements needed to support it.

Latency

To the end user, latency is performance; it is the duration of time that it takes for an operation to complete. In CMS terms, the latency is the amount of time it takes for a page in the system to be loaded and delivered to the end user.

Tip

Jakob Nielsen, a well-known web usability consultant, found that for a user to consider an action to have occurred "instantly," the duration between user action and system response must be 1/10th of a second or faster. Financially, Amazon found that sales decreased 1 percent for every extra 1/10th of a second that a page took to load. You can read more about the findings at Nielsen's blog at http://www.useit.com/alertbox/timeframes.html and Microsoft's Experimentation Platform at http://exp-platform.com/Documents/IEEEComputer2007OnlineExperiments.pdf.

Unfortunately, this duration is subject to many factors that are out of our hands, such as the geographic location of the user relative to the server, network performance on the client side, and so on. Latency can therefore be thought of not only as the time required to complete a single task but also as the sum of the times required to complete the subtasks that make up larger ones.

We can mitigate these issues to a reasonable degree by ensuring a competent mix of efficient coding, proper deallocation and release of resources, and a well-designed caching system that operates at multiple levels of the system.

Throughput

The other half of CMS performance is throughput, which is the number of successful operations the system can complete in a given unit of time.

For example, a large number of users requesting a poorly coded page that leaks resources will quickly bog the server down, reducing the number of successful page deliveries that the server can handle per second. As the throughput decreases, the latency increases, and pages arrive to clients after longer and longer durations until the site eventually crashes altogether.

Establishing Baselines

The purpose of establishing a baseline set of metrics for site performance is to evaluate changes in that performance (be they good or bad) over time; this comparison is called benchmarking. For this reason, it is important that the baseline remain consistent and unchanging over time.

Note

In what sorts of situations would you need to reestablish a baseline? Perhaps instead of running on a single machine, the site now operates behind a load-balancer across five machines (or vice versa). Maybe a reverse proxy has been added, or the network connection has been up- or downgraded. As some fundamental conditions have changed, the baseline needs to be established again for the benchmark metrics to be meaningful.

Component vs. System Baselines

In terms of measuring latency, we've noted that measurements can be taken for whole tasks or for the smaller subtasks that compose them. In CMS terms, that means we can establish baselines at the system (or page) level as well as at the embeddable control level, and at the many shades in between.

Given that the controls that go into the creation of a CMS page can be so variable, for this chapter we will focus primarily on establishing system baselines while highlighting opportunities for testing more discrete components.
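
When a discrete component needs its own baseline, a simple timing harness is often all that's required. The following C# sketch is illustrative only: Measure() times any delegate you hand it, and the RenderHomePageTeaser() call in the usage comment is a hypothetical stand-in for whichever embeddable control or data-access call you want to profile.

using System;
using System.Diagnostics;

public static class ComponentBaseline
{
    // Times a single operation repeatedly and reports its average latency.
    public static void Measure(string name, Action operation, int iterations)
    {
        // Run once outside the timed loop so JIT compilation doesn't skew the results.
        operation();

        var stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            operation();
        }
        stopwatch.Stop();

        double averageMs = stopwatch.Elapsed.TotalMilliseconds / iterations;
        Console.WriteLine("{0}: {1} iterations, {2:F3} ms average", name, iterations, averageMs);
    }
}

// Usage (RenderHomePageTeaser is hypothetical):
// ComponentBaseline.Measure("Home page teaser", () => RenderHomePageTeaser(), 500);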

The Web Capacity Analysis Tool

Microsoft offers an excellent free tool called WCAT, short for "Web Capacity Analysis Tool." This application is designed to offer a wide variety of configuration options that can be used to simulate traffic conditions across the spectrum, from light to extreme, and provide useful (and clear) metrics that developers can use to track system performance over time.

Installing WCAT

WCAT is available from the Microsoft Download Center as part of the Internet Information Services 6.0 Resource Kit Tools. Despite the name, the tools work just fine with IIS 7.0 as well. They can be found at http://www.microsoft.com/downloads/details.aspx?FamilyID=56FC92EE-A71A-4C73-B628-ADE629C89499&displaylang=en.

Warning

Cassini, the built-in web server that comes with Visual Studio, is not a sufficient testing platform for this type of work; set up a site in IIS with a dedicated .NET 4 application pool for the CMS.

WCAT Concepts

Logically and physically, WCAT is divided into a controller and one or more clients. The controller is responsible for communicating with the clients, instructing them when they are permitted to begin making requests, and processing the performance data for the site in question to deliver useful metrics back to the user.

The WCAT clients abstract the idea of connections, allowing the creation of a large number of virtual clients that can speak to the server. For example, if two WCAT client instances are each configured for 50 virtual clients, the server will be working with 100 concurrent connections. In this way, WCAT divides the client concept into two halves: "physical clients" are the actual WCAT client instances, and "virtual clients" are the connections created by the physical clients. This methodology allows significant load to be placed on a website if desired.

Warning

For this chapter, I'm going to assume that you're running the CMS and WCAT from the same machine. It's possible (and preferable) to use at least two machines because the process is fairly resource-intensive, but it's also not realistic to assume that everyone has a variety of capable machines accessible. The concepts and methods are all the same, but for the numbers to be truly realistic, know that division across machines is a better solution.

Configurations

One half of the WCAT setup resides within the configuration file used for a specific test. This file is a plain-text file with settings on individual lines; it is used to define settings related to the controller and how it will behave.

Creating a configuration file for the CMS is very simple. Once the IIS 6.0 Resource Kit Tools are installed, entries for the WCAT controller and client will appear under the Start menu. Open a WCAT controller command prompt; you should be presented with the screen shown in Figure 8-1.


Figure 8-1. The WCAT controller command prompt

Note

If you are using a 32-bit version of Windows, the path will be Program Files rather than Program Files (x86).

Although you can save the configuration file anywhere you like, for now let's place it in the same folder as the WCAT controller executable. Using the text editor of your preference, create a file called cms_baseline.cfg that contains what is shown in Listing 8-1.

Listing 8-1. The Sample WCAT Configuration for the CMS

Warmuptime 10s
Duration 60s
CooldownTime 20s
NumClientMachines 2
NumClientThreads 30

The parameters related to time are all specified in terms of seconds in Listing 8-1, but you are permitted to use minutes and hours if desired. For any WCAT test, there can be a warm-up period, a testing duration, and a cool-down period.

Warning

Similar to the caveats around multithreading and parallelism, performance testing is very subjective and specific to the conditions of the test. You may need only 5 seconds to adequately spin up the CMS on your localhost, whereas another machine may require 15. I have provided values that worked on my machine and should leave a margin of error, but establishing a baseline may require unique settings for your particular environment.

The warm-up period gives the server an opportunity to fully initialize your application. During the warm-up period, WCAT will add virtual clients until the maximum (NumClientMachines × NumClientThreads) has been reached. Most .NET developers are familiar with the initial delay incurred by the Just-In-Time compiler as an application is loaded; this warm-up period helps to compensate. Anecdotally, 10 seconds seems to work fine on the applications I have tested.

The testing duration is exactly as it sounds; during this period, the server will be hammered with requests according to the scenario file we will create in the next step.

The cool-down period permits long-standing requests to finish executing, helping to ensure that the metrics provided are as accurate as possible. I typically allot approximately 20 seconds, which usually permits all the requests to terminate even when conditions on the server are strained.

Scenarios

The other half of the WCAT setup resides within the scenario file used for a specific test. This file is also a plain-text file with settings on individual lines; it is used to define settings related to the behavior of the WCAT clients. Examples of such settings are the page or resource being requested, the HTTP verb being used to make the request, and so on.

Open a WCAT client command prompt via the IIS resources. You should be presented with the screen shown in Figure 8-2.


Figure 8-2. The WCAT client command prompt

In the same fashion as the configuration file, we will place the scenario file in the same folder as the WCAT controller executable. Using the text editor of your preference, create a file called cms_baseline_scenario.cfg that contains what is shown in Listing 8-2.

Listing 8-2. The Sample WCAT Scenario for the CMS

SET Server = "localhost"
SET Port = 80
SET Verb = "GET"
SET KeepAlive = true

NEW TRANSACTION
classId = 1
Weight = 100

NEW REQUEST HTTP
URL = "/"

The settings in this file should be fairly straightforward; we will be making requests to localhost on port 80 using the GET verb, and we will reuse the same TCP connections (called persistent connections) to send and receive HTTP communications throughout the test. This transaction is the only one defined for this test, so it makes up 100 percent of the load. We will be requesting the CMS home page, located at http://localhost/.

Tip

In general, modern browsers support persistent HTTP connections as the creation of new ones is costly; however, they do time out automatically after a given period of inactivity to free up resources. Feel free to switch off the KeepAlive parameter to see the performance implications if a large number of clients arrive without supporting persistent connections.
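
The Weight value becomes more meaningful when a scenario defines several transactions, because it controls how the load is distributed among them. As a sketch only (the /articles/ URL is a hypothetical second page, not part of the CMS as built so far), a scenario that sends roughly 80 percent of requests to the home page and 20 percent to an article listing could look like this:

SET Server = "localhost"
SET Port = 80
SET Verb = "GET"
SET KeepAlive = true

NEW TRANSACTION
classId = 1
Weight = 80

NEW REQUEST HTTP
URL = "/"

NEW TRANSACTION
classId = 2
Weight = 20

NEW REQUEST HTTP
URL = "/articles/"

Mixing several representative pages this way keeps the baseline closer to real traffic patterns than hammering a single URL.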

Running a WCAT Test Against the CMS

With the configuration and scenario files created, we can run the test and see how the CMS performs as a baseline. From the WCAT controller window, type the following command:

wcctl -a localhost -c cms_baseline.cfg -s cms_baseline_scenario.cfg

This will spin up the WCAT controller, as shown in Figure 8-3. Note that the controller is waiting for 2 physical clients with 30 virtual clients each to establish communication before beginning the test.


Figure 8-3. The WCAT controller is awaiting client communication.

If you do not already have two separate WCAT client command prompts open, you will need to open them. Once they are available, type the following command in each to start the clients:

wcclient localhost

Note

The test will not begin until the expected number of clients have connected successfully.

The client should notify you that it is awaiting instruction from the controller, as shown in Figure 8-4.


Figure 8-4. The WCAT client is connected and waiting for instruction to proceed.

Interpreting Performance Results

When the test is complete, you should see notifications in both the client and controller indicating the statistics related to each. The aggregation is present in the controller, as shown in Figure 8-5.

Note

The page I tested was specifically created to have a moderate number of database-connected embeddable controls; it approximated the structure of a typical corporate home page.


Figure 8-5. The results of the WCAT test

Based on these results, there is good news, and there is bad news. The good news is that out of 885 total requests over 60 seconds, all of them returned a response of 200 OK, indicating no server errors or other problems. The bad news is that the server was processing only 18 requests per second with a data transfer rate of approximately 90 KB per second, which is altogether terribly low.

Improving CMS Performance with Caching

There are entire books devoted solely to the task of improving the performance of web applications (even those that are static and unchanging). The potential tweaks cover every aspect of the application, from the order that JavaScript is placed on a page to how images are stored on the server. Not all of them are difficult to implement or time-consuming; caching alone can make a tremendous difference in CMS performance.

HTTP.sys and the OutputCache

One of the simplest methods of improving performance is the application of an effective caching methodology. Implementing caching too early can hide significant performance problems, but even the best coding in the world will eventually hit a performance limit based on the fact that, in a given system, tasks X, Y, and Z take a specific amount of time to complete.

Caching infrequently changing pages can dramatically improve both server health and user experience by simply eliminating the execution of those steps and returning the results generated by a previous request. We can handle and configure this at two levels in IIS: HTTP.sys and the .NET OutputCache.

Note

The use of a distributed system such as Memcached for storing CMS data differs from this type of caching in that Memcached is used to speed the retrieval of information in a request that by definition has not itself been cached. HTTP.sys and the OutputCache make up a higher level of caching that exists at the OS and server levels, respectively.

Introduced in IIS 6.0, the HTTP.sys driver is a kernel-mode device driver that listens for HTTP requests on the network and handles communication between IIS and the client; both IIS 6.0 and IIS 7.0 rely on this driver to process HTTP requests. Combining kernel-mode caching with .NET's OutputCache (which operates in user mode) results in a site that is better equipped to handle both spikes in traffic as well as consistently high load. The specific differences of kernel- vs. user-mode operation are discussed later in this chapter.

Tip

Without delving into a low-level discussion of execution modes, what does it mean to maintain a cache in user mode? The user-mode cache resides directly in the worker process associated with the application and is extremely fast as a result. It is usually best to combine user- and kernel-mode caching because kernel mode by definition cannot support features such as .NET authentication that require user-mode functionality. In cases where kernel-mode caching is enabled but the feature is unsupported, the content is served without being cached.

IIS 7.0 provides an OutputCache option where both user- and kernel-mode caching can be configured. Figure 8-6 shows this window.


Figure 8-6. Enabling user-mode and kernel-mode caching
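
The same behavior can also be configured in web.config rather than through the IIS Manager UI, which makes it easier to keep under source control. A minimal sketch, assuming we want rendered .aspx output cached in both user and kernel mode until the underlying content changes:

<system.webServer>
   <caching enabled="true" enableKernelCache="true">
      <profiles>
         <!-- Cache rendered .aspx output until the source content changes. -->
         <add extension=".aspx"
              policy="CacheUntilChange"
              kernelCachePolicy="CacheUntilChange" />
      </profiles>
   </caching>
</system.webServer>

Once kernel-mode caching is active, running netsh http show cachestate from an elevated command prompt lists the URLs currently held by HTTP.sys, which is a quick way to confirm that the setting is taking effect.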

Warning

In general, it's not a good idea to blindly cache everything as configured in Figure 8-6; for example, it makes little sense to devote resources to caching pages that get viewed once every few weeks. IIS 7.0 lets you customize how specific types of resources are cached at a very granular level, but for the purposes of this discussion, I want to show the two ends of the spectrum (nothing cached vs. everything cached).

Benchmarking CMS Performance

We previously established a baseline set of performance metrics for the CMS without any specific performance tweaks applied; the CMS was capable of delivering approximately 18 requests per second with a data rate of approximately 90 KB per second.

Now that we have applied a change with the intention of improving the performance of the system, we can rerun the same tests from earlier in the chapter and compare the results to learn whether the effect was positive, negative, or neutral.

Figure 8-7 shows the results of a subsequent run on the CMS.


Figure 8-7. Caching has significantly improved CMS performance.

With one small IIS tweak, requests to localhost increased to approximately 1170 per second, with a data transfer rate of approximately 5811 KB per second. From a performance perspective, the CMS is handling 65 times more requests and data (for a total of 58,246 successful page deliveries) in the same time period as before.

Granted, it's not necessarily the case that every single page can be cached until it changes; search pages, lists of user comments, and other types of content must be capable of updating regularly. With that in mind, those pages that cannot be cached will have significantly more resources available as a result of caching the infrequently changing ones.
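
Output caching can also be applied selectively from code so that only the infrequently changing pages opt in while search pages and comment lists remain fully dynamic. The following is a minimal sketch, assuming a Web Forms code-behind for a rarely changing CMS page; the class name and the five-minute window are illustrative, not part of the CMS as written.

using System;
using System.Web;
using System.Web.UI;

public partial class HomePage : Page   // hypothetical page class
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Allow the rendered output to be cached by the server and downstream
        // proxies for five minutes; requests within that window skip page execution.
        Response.Cache.SetCacheability(HttpCacheability.Public);
        Response.Cache.SetExpires(DateTime.Now.AddMinutes(5));
        Response.Cache.SetValidUntilExpires(true);
    }
}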

Configuration Considerations

Beyond the large-scale performance issues, there are additional considerations related to configuration that can crop up while deploying the CMS to a production (or even test) environment. Let's explore a few of them, why they might represent concerns, and how to quickly address them.

Enable Release Mode for Production

When a .NET application is running in debug mode, the compiler inserts additional instructions into the assembly that facilitate breakpoints. Debug compilation results in a larger end file size and somewhat reduced performance compared to release mode, which inserts no additional instructions (and therefore will not trigger breakpoints within the Visual Studio IDE).

Listing 8-3 shows the IL that the compiler generates for the Execute() method of the Business.Scripting class; note the IL instruction nop, which denotes "no operation." When the nop instruction is hit, execution will halt, and the address will be updated to point to the return address so that program execution may continue. This operation provides a reliable location for the debugger to halt execution and will occur before the opening and closing of scope blocks, when calling methods, when accessing properties, and similar events.

Listing 8-3. The Compiler Has Inserted nop Instructions as Placeholders

.method public hidebysig instance object
        Execute(string script) cil managed
{
  .param [0]
  .custom instance void [System.Core]System.Runtime.CompilerServices.DynamicAttribute::.ctor() = ( 01 00 00 00 )
  // Code size       32 (0x20)
  .maxstack  3
  .locals init ([0] object CS$1$0000)
  IL_0000:  nop
  .try
  {
    IL_0001:  nop
    IL_0002:  ldarg.0
    IL_0003:  ldfld      class [Microsoft.Scripting]Microsoft.Scripting.Hosting.ScriptEngine Business.Scripting.Scripting::_engine
    IL_0008:  ldarg.1
    IL_0009:  ldarg.0
    IL_000a:  ldfld      class [Microsoft.Scripting]Microsoft.Scripting.Hosting.ScriptScope Business.Scripting.Scripting::_scope
    IL_000f:  callvirt   instance object
[Microsoft.Scripting]Microsoft.Scripting.Hosting.ScriptEngine::Execute(string,
class [Microsoft.Scripting]Microsoft.Scripting.Hosting.ScriptScope)
    IL_0014:  stloc.0
    IL_0015:  leave.s    IL_001d
  }  // end .try
  catch [mscorlib]System.Object
  {
    IL_0017:  pop
    IL_0018:  nop
    IL_0019:  ldnull
    IL_001a:  stloc.0
    IL_001b:  leave.s    IL_001d
  }  // end handler
  IL_001d:  nop
  IL_001e:  ldloc.0
  IL_001f:  ret
} // end of method Scripting::Execute

In general, you'll want to make sure you switch the compilation mode to release before deploying an application to production. It's surprising how many production applications I've run across that are still set to run in debug mode. Having debugging instructions in the final assembly is a definite hit to performance even in low-volume situations; wasting cycles on nop instructions certainly doesn't help when the traffic really starts to roll in.
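
For an ASP.NET application such as the CMS, the compilation mode is governed by the debug attribute of the <compilation> element in web.config; the following minimal sketch assumes the .NET 4 application pool used throughout this chapter.

<system.web>
   <!-- Compile dynamically generated page code without debug instructions. -->
   <compilation debug="false" targetFramework="4.0" />
</system.web>

On a dedicated production server, adding <deployment retail="true" /> to machine.config goes one step further and forces retail behavior for every application on the machine.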

Note

The compiler will also perform additional optimizations in release mode beyond simply excluding nop instructions; the critical point is simply that the IL generated for debugging differs from release output.

Removing the Server, X-Powered-By, and X-AspNet-Version Headers

By default, IIS will append a variety of extra information to the HTTP response sent downstream to the client. This includes information related to the server, which version of .NET is running, and so on. Figure 8-8 shows a sample response from the CMS home page.


Figure 8-8. The HTTP response has additional information that could be attractive to hackers.

Note

The information in Figure 8-8 (and throughout this section) was retrieved with the Firebug extension to Mozilla Firefox; this extension is available for free at http://getfirebug.com/. If you prefer a different browser (or simply to operate outside of a browser altogether), you can use an external web debugging tool such as Fiddler, available at http://www.fiddler2.com/fiddler2/.

Although removing this information isn't going to have a gigantic impact on application performance, it does help hide the specifics of your server from prying eyes who may seek to exploit weaknesses in particular software configurations. There is also a tiny long-term performance improvement because each response carries slightly less data; this information provides no direct benefit to the end user anyway.

Tip

In the next chapter, we'll be creating a system for friendly URLs that don't have file extensions or other system-specific materials in them. Although information such as the hidden VIEWSTATE field can still denote a site as running on IIS / .NET, every bit of security (or obscurity) helps.

Removing the X-Powered-By header is trivially simple; it resides under the HTTP Response Headers section of IIS 7.0 as shown in Figure 8-9. Simply right-click it and select Remove.


Figure 8-9. Removing the X-Powered-By header from the HTTP response
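
If you would rather keep this change in source control than rely on a manual step in IIS Manager, the same header can be removed declaratively; a small sketch for the CMS web.config:

<system.webServer>
   <httpProtocol>
      <customHeaders>
         <!-- Strip the X-Powered-By header that IIS adds by default. -->
         <remove name="X-Powered-By" />
      </customHeaders>
   </httpProtocol>
</system.webServer>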

Removing the X-AspNet-Version header is also very simple; add the declaration from Listing 8-4 to the CMS web.config within the <system.web> section to do so.

Listing 8-4. Removing the X-AspNet-Version Header via the Application's web.config File

<httpRuntime enableVersionHeader="false" />

The only remaining task is to remove the Server header, which unfortunately requires the use of an HTTP module. The CMS includes one, called ObscureHeader; the code for this is shown in Listing 8-5.

Listing 8-5. Removing the Server Header via an HttpModule

using System;
using System.Web;

namespace ObscureHeader
{
    /// <summary>
    /// Removes the "Server" header from the HTTP response.
    /// If the CMS AppPool is running in Integrated Mode, this will run for all requests.
    /// </summary>
    public class RemoveServer : IHttpModule
    {
        public void Init(HttpApplication context)
        {
            context.PreSendRequestHeaders += RemoveServerFromHeaders;
        }

        private void RemoveServerFromHeaders(object sender, EventArgs e)
        {
            // strip the "Server" header from the current Response
            HttpContext.Current.Response.Headers.Remove("Server");
        }

        public void Dispose()
        {
            // no code necessary
        }
    }
}

Once registered in the <modules> section of <system.webServer> in the web.config file, as shown in Listing 8-6, this module will strip the final header from the HTTP response.

Listing 8-6. Registering the HttpModule for the CMS

<system.webServer>
   <modules runAllManagedModulesForAllRequests="true">
      <add name="RemoveServer" type="ObscureHeader.RemoveServer" />
   </modules>
   ...
</system.webServer>

If we browse to the application now, we should see significantly cleaner HTTP headers that don't display as much identifying information about the application environment to the client. Figure 8-10 shows this output.


Figure 8-10. The final, cleaned HTTP response headers

Debugging Concepts

Debuggers are a powerful weapon in the developer's arsenal. They allow dynamic analysis of an application, which refers in part to the ability to take control of the application execution. They also enable the inspection and modification of memory, the creation of breakpoints to halt the application and examine the current status, and so on. Debuggers form the backbone of legitimate development in addition to facilitating the discovery and exploitation of software flaws and vulnerabilities.

There are several different ways to divide debuggers into categories: two useful ones are white-box vs. black-box debuggers and user-mode vs. kernel-mode debuggers. We will look at each division and what defines it and then examine how .NET handles debugging in general.

White-Box vs. Black-Box Debuggers

A white-box debugger, such as the one built into Visual Studio 2010, has access to the source code of the application itself, and we as developers are aware of how the application is implemented at this level. The debugger therefore has a high degree of information (and, in the case of the .NET Framework, metadata) about the code and the environment in which it is running. White-box debuggers are common in integrated development environments and are typically fairly sophisticated because of how extensively they are wired into the IDE; for example, Visual Studio 2010's debugger allows you to share breakpoints with a developer on a separate machine, enabling a different developer to reproduce error conditions without being at your machine or screen sharing.

Tip

This ability to share breakpoints is a very useful new feature of Visual Studio 2010, covered later in this chapter in the "Collaborative Debugging" section.

By contrast, black-box debuggers are attached to running processes but do not have the actual source code to the application available. This is typically the case when debugging third-party code or while attempting to reverse engineer or exploit some piece of software. The black-box debugger still provides many of the features we've discussed thus far, although more experimentation and time is typically involved in the debugging effort because we would not have insight into the implementation of the application.

In our day-to-day development, we'll typically rely primarily on white-box debuggers, although it should be noted that the tools used in this chapter can also operate in a black-box fashion on code we did not implement ourselves.

User Mode vs. Kernel Mode

An important subdivision in debugger types is whether the debugger operates in user mode or kernel mode. For example, suppose we spin up an ASP.NET worker process and attach a debugger instance to it. The debugger and the worker process are both operating in user mode, which is a fairly protected mode; applications aren't capable of accessing hardware and memory directly. Code executing in this mode is required to use the hooks provided by operating system APIs to access resources.

Kernel mode is a much lower level of operation and debugging; this is the realm in which device drivers and other software that require machine-level access to the CPU and memory generally operate (although they're not specifically required to). Code executing in kernel mode is given the highest level of implicit trust and is capable of executing CPU instructions directly as well as referencing any memory address in the system. Whereas an unhandled exception in user-mode code generally crashes only the application, an unhandled exception in kernel mode crashes the system itself.

Tip

The HTTP.sys discussion earlier in the chapter noted that it was best to combine user- and kernel-mode caching because of the higher-level features that user mode provides compared to kernel-mode's low-level system access.

The x86 architecture maps these modes to a series of rings: kernel mode is ring 0, while user mode is ring 3. This is a protection and isolation concept that exists to restrict levels of access to low-level resources and data; ring 0 is the lowest and most unrestricted level, while ring 3 is the highest level with the least "bare-metal" control of the hardware. Unless you're writing or debugging this type of software on a regular basis, you probably won't spend much (if any) time at this level.

Tip

You can find more information about the x86 architecture as it pertains to execution modes in the Intel Architecture Software Developer's Manual at http://download.intel.com/design/PentiumII/manuals/24319202.pdf. Section 4.5, "Privilege Levels," addresses the specifics of the ring hierarchy (including the purposes of the remaining rings).

Debugging in Visual Studio 2010 doesn't require you to become intimately familiar with concepts such as protection rings to be productive, but understanding some of the low-level details of the debugger and the system architecture will help clarify what's happening when things go wrong.

Historical Debugging via IntelliTrace

Normally, debugging via breakpoints is a somewhat one-way operation. By that I mean we have access to the application state as it currently exists, and we can examine the value that different variables contain at the moment the breakpoint was triggered. What has been lacking thus far is a convenient way to unroll the application execution and see the steps (in their state at an arbitrary time) that led us to the breakpoint condition.

One solution is to simply write what could quickly become excessive logging code, tracking and recording the information contained in memory locations. After executing the program, you could sift through those records in the hopes of finding what you're looking for. It's not a terribly efficient use of time, and it also presumes that the necessary data will be captured properly. Alternatively, you could set a breakpoint early in the application and single-step through to a potential trouble spot, taking note of potentially relevant information along the way.

I'm sure you could think of other methods that would be better or worse, depending on the nature of the application, but it's safe to say that these options are fairly tedious on anything beyond a trivial application; if a better way exists, it makes sense to utilize it. Visual Studio 2010 introduces IntelliTrace historical debugging, literally allowing us to step backward through the execution and unwind the state to a previous point.

For example, suppose that while viewing a CMS page you notice that a certain embeddable control is missing. The issue could exist in a number of places: perhaps the database record for that content is incorrect. It's possible that the control threw an exception and was simply not loaded. Having the capability to set a breakpoint and work backward is extraordinarily powerful, allowing developers to zero in on the problem far more quickly than was previously possible.

Because the additional debugging instructions are a serious knock to performance, the full features are disabled by default in the IDE; to access them, go to Tools ➤ Options ➤ IntelliTrace and enable the collection of both IntelliTrace events and call information, as shown in Figure 8-11.

Once this setting has been applied, set a breakpoint, and start the application. When the breakpoint is hit, you will notice additional choices next to the red circle to the left side of the current line of code that facilitate navigation through the application, as shown in Figure 8-12.


Figure 8-11. Enabling IntelliTrace in the IDE


Figure 8-12. The breakpoint now has debug navigation operations to the left of the code.

Clicking the double up arrows will cause the application execution to move backward to the last valid event that IntelliTrace was able to capture, as demonstrated in Figure 8-13. The current line is highlighted in a lighter shade of maroon than the typical breakpoint.


Figure 8-13. Moving backward through the application execution

If you open the IntelliTrace calls view (available from the Debug menu's IntelliTrace options while debugging), you will see the execution history shown in Figure 8-14.

Clicking any of the calls in this window expands it; the application state will automatically revert to that point in the execution history. Figure 8-15 shows the Page_Load() method as it was during this debugging session.


Figure 8-14. The IntelliTrace calls view allows navigation through the application execution history.


Figure 8-15. Exploring the Page_Load() method as it was executed during this session

That's the core of IntelliTrace debugging in Visual Studio 2010. Having the capacity to unwind the application is a tremendous boon to developers; I'm sure many can sympathize with the horror stories of hard-to-track bugs that occur only in specific conditions when the moon is just right. With the IntelliTrace information recorded, you can highlight the specific moment things went off the rails and have a better shot at not only identifying the cause but also doing so in a fraction of the time compared to the traditional methods.

Note

The previous iteration of the CMS had a delightful situation where IntelliTrace debugging would've been a gigantic help. When certain conditions occurred, the CMS would "lose" the site ID that helped to map content between friendly URLs and the site tree; as the object was considered to have been updated, this change was immediately stored to Memcached and dutifully retrieved, causing exceptions to be thrown every time users requested that page. Pinning the problem down involved a trek across multiple libraries with a range of conditional breakpoints set, trying to find the exact moment things went wrong. Being able to unwind from some known points would've saved a ridiculous number of hours fixing what turned out to be a simple bug that appeared only intermittently.

Collaborative Debugging

Microsoft identified room for improvement with regard to the debugging processes executed by most development teams: debugging has typically been a very solo venture—a single process debugged on a single machine by a single user only. Collaborative debugging, introduced in Visual Studio 2010, seeks to alleviate that problem.

Importing and Exporting Breakpoints

Collaborative debugging is expressed primarily through breakpoint sharing in Visual Studio 2010, and it is extremely simple to perform. First, set a breakpoint on a line; in the case of Figure 8-16, we've set it on the line that creates a new list of ScriptedFile objects for the content.aspx.cs page. For the purposes of this discussion, we'll assume there's some bug here that can be demonstrated and reproduced.

Once the breakpoint is set, right-click the line in question, and select Breakpoint ➤ Export to save the breakpoint definition to an XML file, as shown in Figure 8-16.

Figure 8-16. Exporting a breakpoint

The actual breakpoint file is simply XML that defines the code file and specific location of the breakpoint. Listing 8-7 shows the XML for the breakpoint we set on the ScriptedFile list assignment line; I have highlighted specific information for this condition.

Listing 8-7. The Contents of a Typical Breakpoint XML File

<?xml version="1.0" encoding="utf-8"?>
<BreakpointCollection xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Breakpoints>
    <Breakpoint>
      <Version>15</Version>
      <IsEnabled>1</IsEnabled>
      <IsVisible>1</IsVisible>
      <IsEmulated>0</IsEmulated>
      <IsCondition>0</IsCondition>
      <ConditionType>WhenTrue</ConditionType>
      <LocationType>SourceLocation</LocationType>
      <TextPosition>
        <Version>4</Version>
        <FileName>.\Web\content.aspx.cs</FileName>
        <startLine>28</startLine>
        <StartColumn>8</StartColumn>
        <EndLine>28</EndLine>
        <EndColumn>48</EndColumn>
        <MarkerId>0</MarkerId>
<IsLineBased>0</IsLineBased>
        <IsDocumentPathNotFound>0</IsDocumentPathNotFound>
        <ShouldUpdateTextSpan>1</ShouldUpdateTextSpan>
        <Checksum>
          <Version>1</Version>
          <Algorithm>00000000-0000-0000-0000-000000000000</Algorithm>
          <ByteCount>0</ByteCount>
          <Bytes />
        </Checksum>
      </TextPosition>
      <NamedLocationText>content.Page_Load(object sender, EventArgs e)</NamedLocationText>
      <NamedLocationLine>4</NamedLocationLine>
      <NamedLocationColumn>0</NamedLocationColumn>
      <HitCountType>NoHitCount</HitCountType>
      <HitCountTarget>1</HitCountTarget>
      <Language>3f5162f8-07c6-11d3-9053-00c04fa302a1</Language>
      <IsMapped>0</IsMapped>
      <BreakpointType>PendingBreakpoint</BreakpointType>
      <AddressLocation>
        <Version>0</Version>
        <MarkerId>0</MarkerId>
        <FunctionLine>0</FunctionLine>
        <FunctionColumn>0</FunctionColumn>
        <Language>00000000-0000-0000-0000-000000000000</Language>
      </AddressLocation>
      <DataCount>4</DataCount>
      <IsTracepointActive>0</IsTracepointActive>
      <IsBreakWhenHit>1</IsBreakWhenHit>
      <IsRunMacroWhenHit>0</IsRunMacroWhenHit>
      <UseChecksum>1</UseChecksum>
      <Labels />
      <RequestRemapped>0</RequestRemapped>
      <parentIndex>-1</parentIndex>
    </Breakpoint>
  </Breakpoints>
</BreakpointCollection>

In a separate environment, all that is required is for the developer to open the Breakpoints window and import the file into the IDE. Figure 8-17 demonstrates this; the import option is the red circle with the arrow on its top left quadrant (sixth from the left).

Tip

You can also bring up the Breakpoints window by pressing Ctrl+Alt+B.


Figure 8-17. Importing a breakpoint sets it automatically in the environment.

Coupled with DataTip pinning and annotation, which we'll cover next, this permits developers to set breakpoints, note potentially troublesome areas of code, and then send that information for direct import into another environment by a different developer.

DataTip Pinning and Annotation

Sometimes during the course of debugging (or perhaps code reviews) we encounter variable names, assignments, or conditions that simply don't convey their intention clearly. Developers rely on comments to help explain tricky code segments or to clarify the business logic that goes into a specific set of operations. This is certainly a functional way to do it, but the code can quickly become cluttered if comments are the sole method of communication and conversation between developers.

Visual Studio 2010 introduces the concept of pinning DataTips to the code window, as well as annotating them. Consider Figure 8-18; here we have a breakpoint set on a particular line that for one reason or another isn't clear to us in terms of what's happening. If the mouse is hovered over the _scriptFiles variable, the DataTip we're used to seeing will appear, but it will have a pin icon to the right side. If that pin is clicked, the DataTip will attach itself to the code window for examination and annotation, as shown in the figure.


Figure 8-18. Pinning a variable to the code window for annotation

When the DataTip is pinned to the window, there will be a small control box to the right side of the tip itself when the mouse hovers over it. At the bottom of the box is an "Expand to see comments" option. Clicking this opens a small text box below the tip that holds the annotations, as shown in Figure 8-19.


Figure 8-19. Annotating a DataTip with comments or questions

DataTip annotations can also be exported in a similar fashion to breakpoints. Select Debug ➤ Export DataTips to save the pinned tips to an XML file; Listing 8-8 shows the contents of an exported DataTip.

Listing 8-8. An Exported DataTip XML File

<SOAP-ENV:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:SOAP-
ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-
ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:clr="http://schemas.microsoft.com/soap/encoding/clr/1.0" SOAP-
ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
<SOAP-ENV:Body>
<a1:PersistableTipCollection id="ref-1"
xmlns:a1="http://schemas.microsoft.com/clr/nsassem/Microsoft.VisualStudio.Debugger.DebuggerToo
lWindows.DataTips.PinnableTips.Persistence/VsDebugPresentationPackage%2C%20Version%3D10.0.0.0%
2C%20Culture%3Dneutral%2C%20PublicKeyToken%3Db03f5f7f11d50a3a">
<Tips href="#ref-3"/>
</a1:PersistableTipCollection>
<SOAP-ENC:Array id="ref-3" SOAP-ENC:arrayType="xsd:anyType[1]">
<item href="#ref-4"/>
</SOAP-ENC:Array>
<a3:PinnedTip id="ref-4"
xmlns:a3="http://schemas.microsoft.com/clr/nsassem/Microsoft.VisualStudio.Debugger.DebuggerToo
lWindows.DataTips.PinnableTips.UI/VsDebugPresentationPackage%2C%20Version%3D10.0.0.0%2C%20Cult
ure%3Dneutral%2C%20PublicKeyToken%3Db03f5f7f11d50a3a">
<unopenedState href="#ref-5"/>
<innerTip href="#ref-6"/>
</a3:PinnedTip>
<a1:UnopenedTipData id="ref-5"
xmlns:a1="http://schemas.microsoft.com/clr/nsassem/Microsoft.VisualStudio.Debugger.DebuggerToo
lWindows.DataTips.PinnableTips.Persistence/VsDebugPresentationPackage%2C%20Version%3D10.0.0.0%
2C%20Culture%3Dneutral%2C%20PublicKeyToken%3Db03f5f7f11d50a3a">
<PinnedPosition>782</PinnedPosition>
<Length>0</Length>
<XOffset>361</XOffset>
<RelativeFileName id="ref-7">Web\content.aspx.cs</RelativeFileName>
<AbsoluteFileName id="ref-8">C:\Work\cms_2010\trunk\ContentManagement\Web\content.aspx.cs</AbsoluteFileName>
</a1:UnopenedTipData>
<a3:Tip id="ref-6"
xmlns:a3="http://schemas.microsoft.com/clr/nsassem/Microsoft.VisualStudio.Debugger.DebuggerToo
lWindows.DataTips.PinnableTips.UI/VsDebugPresentationPackage%2C%20Version%3D10.0.0.0%2C%20Cult
ure%3Dneutral%2C%20PublicKeyToken%3Db03f5f7f11d50a3a">
<identity.Moniker id="ref-9">C:\Work\cms_2010\trunk\ContentManagement\Web\content.aspx.cs</identity.Moniker>
<identity.Position>790</identity.Position>
<watchItemCount>1</watchItemCount>
<watchItem0 id="ref-10">_scriptFiles</watchItem0>
<comments id="ref-11">What&#39;s this for?</comments>
<showingComments>true</showingComments>
<uniqueIdentity id="ref-12">{a160d366-698f-41af-9e4e-f7cd50027a23}</uniqueIdentity>
</a3:Tip>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>

Note

Pinned DataTips are visible only while the application is being debugged. They will disappear when not debugging and reflect the value of the variable based on the last time (if any) it was modified.

Summary

The performance of a system has a very real set of implications for user behavior and potentially revenue, depending on the application of that system. As such, having firm data as well as a response plan for troublesome situations is critical to the success of our system. We covered the key terms of performance and how they related to the CMS, explored WCAT and the establishment of baseline metrics, cleaned the HTTP responses delivered to the client, and looked at ways that Visual Studio 2010 improves the debugging experience to ease the identification and resolution of problems when they do occur. As we move into the final chapter, we'll look beyond metrics and into the realm of search engine optimization.
