Chapter 12. Performance

In the previous chapter, we covered the most important security issues related to the OWASP Top 10 initiative, whose goal is, in their own words, "to raise awareness about application security by identifying some of the most critical risks facing organizations".

In this chapter, we're going to review the most common issues that a developer encounters in relation to an application's performance, and we'll also look at which techniques and tips are commonly suggested in order to obtain flexible, responsive, and well-behaved software, with a special emphasis on web performance. We will cover the following topics:

  • Reviewing the concepts behind performance (Application Performance Engineering)
  • Looking at some of the most interesting tools available in Visual Studio to measure and tune performance, including IntelliTrace and newer options such as PerfTips and Diagnostic Tools
  • Checking some of the most useful possibilities available in popular modern browsers through the Developer Tools menu (F12)
  • Commenting on the most widely accepted practices for performance and some of the software tools used to find bottlenecks
  • Finally, looking at the most common problems in a web application's performance, focusing on ASP.NET optimization

Application Performance Engineering

According to Jim Metzler and Steve Taylor, Application Performance Engineering (APE) covers the roles, skills, activities, practices, tools and deliverables applied at every phase of the application life cycle that ensure that an application will be designed, implemented and operationally supported to meet the non-functional performance requirements.

The keyword in the definition is non-functional. It is assumed that the application works, but some aspects, such as the time taken to perform a transaction or a file upload, should be considered from the very beginning of the life cycle.

So, the problem can, in turn, be divided into several parts:

  • On the one hand, we have to identify which aspects of the application might produce meaningful bottlenecks.
  • This implies testing the application, and tests vary depending on the type of application, of course: for example, line-of-business, games, web applications, desktop, and so on. These tests should lead us to state the application's performance goals in relation to the final production environment.
  • The development team should be able to handle performance problems that can be solved (or ameliorated) using proven software techniques: turning intermediate code into native code, assembly restructuring, optimizing the garbage collector, serializing messages for scalability, asynchronous requests, threads of execution, parallel programming, and so on.
  • Another aspect is performance metrics. These metrics should be measurable using some performance testing in order to have real insight about the performance goal.

There are many possible performance metrics that we could consider: physical/virtual memory usage, CPU utilization, network and disk operations, database access, execution time, startup time, and so on.

Each type of application will suggest a distinct set of targets to care about. Also, remember that performance tests should not be carried out until all integration tests are completed.

Finally, let's mention the tests that are usually considered standard when measuring performance:

  • Load testing: This is intended to test software under heavy loads, such as when you test a website simulating lots of users to determine at what point the application's response time degrades or even fails.
  • Stress testing: This is one of the tests that your application should pass if it wants to obtain the official "Made for Windows X.x" logo. It's based on putting the system to work beyond its specifications to check where (and how) it fails. It might be by using heavy load (beyond the storage capacity, for example), very complex database queries, or continuous data input into the system or in database loading, and so on.
  • Capacity testing: MSDN Patterns and Practices also include this type of test, which is complementary to load testing, in order to determine the server's ultimate failure points, while the load testing checks the result at distinct levels of load and traffic.

In these types of tests, it's important to clearly determine what loads to target and to also create a contingency plan for special situations (this is more usual in websites, when, for some reason, a peak in users per second is expected).

The tools

Fortunately, we can count on an entire set of tools in the IDE to carry out these tasks in many ways. As we saw in the first chapter, some of them are available directly when we launch an application in Visual Studio 2015 (all versions, including the Community Edition).

Refer to the A quick tip on execution and memory analysis of an assembly in Visual Studio 2015 section in Chapter 1, Inside the CLR, of this book for more details about these tools, including the Diagnostic Tools launched by default after any application's execution, showing Events, CPU Usage, and Memory Usage.

As a reminder, the next screenshot shows the execution of a simple application and the predefined analysis that Diagnostic Tools show at runtime:

[Screenshot: Diagnostic Tools showing the predefined analysis at runtime]

However, keep in mind that some other tools might be useful as well, such as Fiddler, the traffic sniffer that plays an excellent role when analyzing web performance and request/response packets' contents.

Other tools are programmable, such as the Stopwatch class, which allows us to measure with precision the time that a block of code takes to execute; we also have Performance Counters, available in .NET since the first versions, and Event Tracing for Windows (ETW).

Even in the system itself, we can find useful elements, such as the Event Log (for monitoring behavior, fully programmable from .NET), or external tools designed specifically for Windows, such as the Sysinternals suite, which we already mentioned in the first chapter. In this case, one of the most useful tools you'll find is PerfMon (Performance Monitor), although you may remember that we mentioned FileMon and RegMon as well.
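As a quick illustration of how programmable the Event Log is from .NET, the following sketch simply lists the entries of the standard Application log using the System.Diagnostics types (the formatting is just an example, not something the chapter's samples rely on):

// Reading the Windows Application log with System.Diagnostics.EventLog
EventLog appLog = new EventLog("Application");
foreach (EventLogEntry entry in appLog.Entries)
{
    Console.WriteLine(entry.TimeGenerated + " [" + entry.EntryType + "] " +
        entry.Source);
}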

Advanced options in Visual Studio 2015

The IDE, however (especially the 2015 and 2017 versions), contains many more features to check execution and performance at runtime. Most of this functionality is available through the Debug menu options (some at runtime and others while editing).

However, one of the most ready-to-use tools available in the editor is a new option called PerfTips (Performance Tips), which shows how much time a method took to complete, as we'll see with the next piece of code.

Imagine that we have a simple method that reads file information from the disk and then selects those files whose names don't contain spaces. It could be something like this:

private static void ReadFiles(string path)
{
    DirectoryInfo di = new DirectoryInfo(path);
    var files = di.EnumerateFiles("*.jpg", 
        SearchOption.AllDirectories).ToArray<FileInfo>();
    var filesWoSpaces = RemoveInvalidNames(files);
    //var filesWoSpaces = RemoveInvalidNamesParallel(files);
    foreach (var item in filesWoSpaces)
    {
        Console.WriteLine(item.FullName);
    }
}

The RemoveInvalidNames method uses another simple CheckFile method. Its code is as follows:

private static bool CheckFile(string fileName)
{
    // A file name is considered valid when it contains no spaces
    return !fileName.Contains(" ");
}
private static List<FileInfo> RemoveInvalidNames(FileInfo[] files)
{
    var validNames = new List<FileInfo>();
    foreach (var item in files)
    {
        if (CheckFile(item.Name)) {
            validNames.Add(item);
        }
    }
    return validNames;
}

We could have inserted the CheckFile functionality inside RemoveInvalidNames, but applying the single responsibility principle has some advantages here, as we will see.
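The commented-out call to RemoveInvalidNamesParallel in ReadFiles hints at a parallel variant of the same filter. Its code is not listed here, but a minimal sketch (one possible implementation, not necessarily the one used for the book's measurements) could rely on PLINQ:

private static List<FileInfo> RemoveInvalidNamesParallel(FileInfo[] files)
{
    // AsParallel() partitions the array among the available cores;
    // the filter itself is the same CheckFile predicate used before.
    return files.AsParallel()
                .Where(item => CheckFile(item.Name))
                .ToList();
}

With a predicate as trivial as this one, the parallel version only pays off for very large file sets, and that is exactly the kind of difference the tips shown next help us detect.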

Since the selection of files will take some time, if we establish a breakpoint right before the foreach loop, we will be informed of the time in one of these tips:

[Screenshot: a PerfTip showing the elapsed time at the breakpoint]

Of course, the real value in these code fragments is that we can see the whole process and evaluate it. This is not only about the time it takes, but also about the behavior of the system. So, let's put another breakpoint at the end of the method and see what happens:

[Screenshot: the PerfTip at the breakpoint placed at the end of the method]

As you can see, the entire process took about 1.2 seconds. And the IDE reminds us that we can open Diagnostic Tools to check how this code behaved and have a detailed summary, as the next compound screenshot shows (note that you will see it in three different docked windows inside the tools):

[Screenshot: the Diagnostic Tools summary shown in three docked windows]

In this manner, we don't need to explicitly create a Stopwatch instance to measure how long the process took.

These Performance Tips report the time spent, indicating that it is less than or equal to (<=) a certain amount. This means that they take into account the overhead of the debugging process (symbol loading, and so on) and exclude it from the measurement. Actually, the greatest accuracy is obtained on CLR v4.6 and Windows 10.

As for the CPU graph, it uses all the available cores, and when you find a spike, it is worth investigating, even if it doesn't reach 100%, for the different types of problems that we will enumerate later (keep in mind that this feature is not available until debugging ends).

Advanced options in the Diagnostic Tools menu

Actually, we can trace statements one by one and see exactly where most of the time is spent (and where we should revise our code in search of improvements).

If you reproduce this code on your machine, depending on the number of files read, you'll see that in the bottom window of the Diagnostic Tools menu, there is a list that shows every event generated and the time it took to be processed, as shown in the following screenshot:

[Screenshot: the Events list in Diagnostic Tools with the time taken by each event]

Thanks to IntelliTrace, you can configure exactly how you want the debugger to behave, either in general or for a specific application. Just go to Tools | Options and select IntelliTrace Events (it has a separate entry in the tree view).

This allows the developer to select the types of events they're interested in. For instance, if we want to monitor Console events, we can select the ones we need to target in our application:

[Screenshot: selecting Console events in the IntelliTrace Events options]

To test this, I coded a very simple Console application to show a couple of values and the number of rows and columns available:

Console.WriteLine("Largest number of Window Rows: " + Console.LargestWindowHeight);
Console.WriteLine("Largest number of Window Columns: " + Console.LargestWindowWidth);
Console.Read();

Once IntelliTrace is configured to show the activities of this application, named ConsoleApplication1, we can follow all its events in the Events window, select an event of interest later, and check Activate Historical Debugging on it:

[Screenshot: the Events window with the Activate Historical Debugging option]

Once we do that, the IDE relaunches the execution, and now the Autos, Locals, and Watch windows appear again, but showing the values that the application managed at that precise point of the execution.

In practice, it's like recording every step taken by the application at runtime, including the values of any variable, object, or component that we had previously selected as a target during the process (refer to the next screenshot):

[Screenshot: Historical Debugging showing the values recorded at that point of the execution]

Also, note that the information provided also includes an exact indication of the time spent by every event at runtime.

Moreover, other profiling options for different aspects of our application are available. We can configure them in the Debug menu under the Start Diagnostic Tools Without Debugging option.

Tip

When using Start Diagnostic Tools Without Debugging, the IDE will remind us to change the default configuration to Release if we want to obtain accurate results.

Observe that the profiler can be attached to distinct applications in the system, not just the one we're building. A new configuration page opens, and the Analysis Target option shows distinct types of applications, as you can see in the next screenshot.

It could be the current application (ConsoleApplication1), a Windows Store App (either running or already installed), a web page opened on a Windows Phone, any other executable we select, or an ASP.NET application running on IIS:

[Screenshot: the Analysis Target options]

And this is not all in relation to performance and IntelliTrace. If you select the Show All Tools link, more options are presented, relating to the distinct types of applications and technologies that can be measured.

In this way, under the Not Applicable Tools section, we see other interesting features, such as the following:

  • Application timeline: To check which areas of the application's execution take the most time (and detect issues such as the typical low frame rate).
  • HTML UI Responsiveness: Especially useful when you have an application that mixes the server and client code, and some actions in the client take too much time (think of frameworks such as Angular, Ext, React, Ember, and so on).
  • Network: A very useful complement to the previous web scenario, where the problem resides in the network itself. You can check response headers, timelines for every request, cookies, and much more.
  • Energy consumption: This makes sense especially in mobile applications.
  • JavaScript memory: Again, very useful when dealing with web apps that use external frameworks in which we don't know exactly where the potential memory leaks are.

The next screenshot shows these options:

[Screenshot: the additional profiling tools listed under Not Applicable Tools]

As you can see, these options appear as Not Applicable since they don't make sense in a Console app.

Once we launch the profiler with the Start button, a wizard starts, and we have to select the type of analysis: CPU Sampling, Instrumentation (to measure function calls), .NET Memory Allocation, and Resource Contention Data (concurrency), which can detect threads waiting for other threads.

In the wizard's last screen, we have a checkbox that indicates whether we want to launch the profiling immediately afterwards. The application will be launched, and when the execution is over, a profiling report is generated and presented in a new window:

[Screenshot: the generated profiling report]

We have several views available: Summary, Marks (which presents all the timing marks related to the execution), and Processes (obviously, showing information about any process involved in the execution).

This last option is especially interesting for the results we obtain. Using the same ConsoleApplication1 project, I'm going to add a new method that creates a Task object and sleeps its execution for 1,500 ms:

private static void RunANewTask()
{
    Task task = Task.Run(() =>
    {
        Console.WriteLine("Task started at: " + 
            DateTime.Now.ToLongTimeString());
        Thread.Sleep(1500);
        Console.WriteLine("Task ended at: " + 
            DateTime.Now.ToLongTimeString());
    });
    Console.WriteLine("Task finished: " + task.IsCompleted);
    task.Wait();  // Blocked until the task finishes
}

If we activate this option in the profiler, we're shown a bunch of views to analyze, and the report generated lets us filter data in distinct ways depending on what we need: Time Call Tree, Hot Lines, Report Comparison (with exports), Filters, and even more.

For example, we can view the Call Stack at the time the view was collected by double-clicking on an event inside the Diagnostic Tools menu:

[Screenshot: the contention report showing Most Contended Resources and Most Contended Threads]

Note how we're presented with information related to Most Contended Resources and Most Contended Threads, with a breakdown of each element monitored: either handles or thread numbers. This is one of the features that, although available in previous versions of Visual Studio, had to be managed via Performance Counters, as you can read in Maxim Goldin's article Thread Performance - Resource Contention Concurrency Profiling in Visual Studio 2010, available as part of MSDN Magazine at https://msdn.microsoft.com/en-us/magazine/ff714587.aspx.

Besides the information shown in the screenshot, a lot of other views give us more data about the execution: Modules, Threads, Resources, Marks, Processes, Function Details, and so on.

The next screenshot shows what you will see if you follow these steps:

[Screenshot: the profiling report views after following these steps]

To summarize, the IDE provides a wide set of modern, up-to-date tools, and it's just a matter of deciding which one is the best fit for the analysis required.

Other tools

As we saw in the previous chapter, modern browsers offer new and exciting possibilities to analyze web page behavior in distinct ways.

Since it is assumed that the initial landing time is crucial to the user's perception, some of these features relate directly to performance (analyzing content, summarizing request times for every resource, presenting graphical information to spot potential problems at a glance, and so on).

The Network tab, usually present in most of the browsers, shows a detailed report of loading times for every element in the current page. In some cases, this report is accompanied by a graphical chart, indicating which elements took more time to complete.

In some cases, the names might vary slightly, but the functionality is similar. For instance, in Edge, you have a Performance tab, which records activity and generates detailed reports, including graphical information.

In Chrome, we find its Timeline tab, a recording of the page performance, which also presents a summary of the results.

Finally, in Firefox, we have an excellent set of tools to check performance, starting with the Net tab, which analyzes the download time for every request and even presents a detailed summary when we hover the cursor over each element in the list. It also allows us to filter these requests by category: HTML, CSS, JS, images, plugins, and so on, as shown in the following screenshot:

[Screenshot: the Firefox Net tab with requests filtered by category]

Also, in Chrome, we find another interesting tab: Audits. Its purpose is to monitor distinct aspects of page behavior, such as the correct usage (and the impact) of CSS, the combination of JavaScript files to improve the overall performance (an operation called bundling and minification), and, in general, a complete list of issues that Chrome considers improvable, mainly in two areas: Network Utilization and Web Page Performance. The next screenshot shows the final report on a simple page:

[Screenshot: the Chrome Audits report for a simple page]

To end this review of performance features linked to browsers, also consider that some browsers provide a Performance tab specifically intended to measure load response times, or similar utilities, such as PageSpeed Insights in the case of Chrome and a similar one in Firefox (I would especially recommend Firefox Developer Edition for its highly useful features for a developer).

In this case, you can record a session in which Firefox gets all the required information to give a view of the performance, which you can later analyze in many forms:

[Screenshot: a recorded performance session in Firefox]

Note that the analysis is mainly focused on JavaScript usage, but it is highly customizable for other aspects of a page's behavior.

The process of performance tuning

Just like any other software process, we can conceive of performance tuning as a cycle. During this cycle, we try to identify and get rid of any slow feature or bottleneck, up to the point at which the performance objective is reached.

The process goes through data collection (using the tools we've seen), analyzing the results, and changes in configuration, or sometimes in code, depending on the solution required.

After each cycle of changes is completed, you should retest and measure the code again in order to check whether the goal has been reached and your application has moved closer to its performance objectives. Microsoft's MSDN suggests a cycle process that we can extrapolate for several distinct scenarios or types of applications.

Keep in mind that software tuning often implies tuning the OS as well. You should not change the system's configuration in order to make a particular application perform correctly. Instead, try to recreate the final environment and the possible (or predictable) ways in which that environment is going to evolve.

Only when you are absolutely sure that your code is the best possible should you suggest changes in the system (memory increase, better CPUs, graphic cards, and so on).

The following graphic, taken from the official MSDN documentation, highlights this performance cycle:

[Figure: the performance-tuning cycle, from the MSDN documentation]

Performance Counters

As you probably know, the operating system uses Performance Counters (a feature installed by default) to check its performance and eventually notify the user about performance limitations or poor behavior.

Although they're still available, the new tools that we've seen in the IDE provide a much better and integrated method to check and analyze the application's performance.
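If you ever need a quick reading outside the IDE, they remain fully accessible from code through the System.Diagnostics.PerformanceCounter class. A minimal sketch using two standard Windows counters could be the following (it also needs the System.Threading namespace for the sampling pause):

// Standard Windows counters: total CPU usage and available physical memory
var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
var freeMemory = new PerformanceCounter("Memory", "Available MBytes");
cpu.NextValue();                // the first reading is always 0; it just primes the counter
Thread.Sleep(1000);             // sample over one second
Console.WriteLine("CPU: " + cpu.NextValue().ToString("F1") + " %");
Console.WriteLine("Available memory: " + freeMemory.NextValue() + " MB");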

Bottleneck detection

The official documentation in MSDN gives us some clues that we can keep in mind in the process of bottleneck detection and divides the possible origins mainly into four categories (each one proposing a distinct management): CPU, memory, disk I/O, and network I/O.

For .NET applications, some recommendations are commonly accepted for identifying the possible bottlenecks:

  • CPU: As for the CPU, check Diagnostic Tools in search of spikes. If you find one, narrow the search to identify the cause and analyze the code. A spike is considered harmful if CPU usage stays above 75% for more than a certain amount of time.
    • The consequence, in this case, might well be associated with the code. Generally speaking, asynchronous processes, tasks, and parallel programming are recognized to have a positive impact on solving these kinds of problems.
  • Memory: Here, a memory peak can have several causes. It may be our code, but it may also be another process making extensive use of memory (physical or virtual).
    • Possible causes are unnecessary allocations, inefficient clean-up or garbage collection, the lack of a caching system, and others. When virtual memory comes into play, the results may get worse immediately.
  • Disk I/O: This refers to the number of operations (read/write) performed, either on the local storage system or in the network the application has access to.
    • There are multiple causes that can provoke a bottleneck here: reading or writing to long files, accessing a network that is overused or not optimally configured, operations that imply ciphering data, unnecessary reads from databases, or an excess of paging activity.
    • To solve these kinds of problems, MSDN recommends the following (a minimal illustration of the first recommendation appears right after this list):
    • Start by removing any redundant disk I/O operations in your application.
    • Identify whether your system has a shortage of physical memory, and, if so, add more memory to avoid excessive paging.
    • Identify whether you need to separate your data onto multiple disks.
    • Consider upgrading to faster disks if you still have disk I/O bottlenecks after doing all of the preceding options.
  • Network I/O: This is about the amount of information sent/received by your server. It could be an excessive number of remote calls or the amount of data routed through a single network interface card (NIC traffic), or it might have to do with large chunks of data sent or received in a large number of calls.
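Regarding the first of those disk I/O recommendations, a possible way to remove redundant reads is to cache file contents the first time they are requested instead of going back to the disk on every call. This is a minimal sketch, not taken from MSDN, and the cache and method names are just illustrative:

// Requires: using System.Collections.Concurrent; using System.IO;
private static readonly ConcurrentDictionary<string, string> fileCache =
    new ConcurrentDictionary<string, string>();

private static string ReadCached(string path)
{
    // File.ReadAllText runs only the first time a given path is requested;
    // subsequent calls are served from memory.
    return fileCache.GetOrAdd(path, p => File.ReadAllText(p));
}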

Every possible bottleneck might have a distinct root cause, and we should carefully analyze the possible origins based on questions such as these: is it because of my code or is it the hardware? If it is a hardware problem, is there a way to accelerate the process implied through software improvements? And so on.

Bottleneck detection in practice

When determining bottlenecks in .NET, you can still use Performance Counters (besides all those tools we've already seen), although the techniques covered earlier are supposed to ease the detection process considerably.

However, the official recommendations linked to some of these detections are still a valuable clue, so the key here is to look for the equivalent values in the newer tools.

There are several types depending on the feature to be measured, as MSDN suggests:

  • Excessive memory consumption: Since the cause is usually wrong memory management, we should look at the values of the following counters:
    • Process\Private Bytes
    • .NET CLR Memory\# Bytes in all Heaps
    • Process\Working Set
    • .NET CLR Memory\Large Object Heap size

    The key with these counters is this: if you find an increase in Private Bytes while # Bytes in all Heaps remains the same, there is some kind of unmanaged memory consumption. If you observe an increase in both counters, the problem lies in the managed memory consumption.

  • Large working set size: The working set is the set of memory pages loaded in RAM at a given time. The way to measure this problem used to be the Process\Working Set performance counter. Now we have other tools available, but the points to look for are basically the same:
    • If you get a high value, it might mean that the number of assemblies loaded is very high as well. There's no specific threshold to watch in this counter; however, a high or frequently changing value could be a sign of memory shortage.
    • If you see a high rate of page faults, it probably means that your server should have more memory.
  • Fragmented large object heap: In this case, we have to care about objects allocated on the large object heap (LOH). Generally, objects greater than 85 KB are allocated there; this was traditionally detected using the .NET CLR Memory\Large Object Heap size counter, and now we can use the memory diagnostic tools that we've already seen.
    • They might be buffers (for large strings, byte arrays, and so on) that are common in I/O operations (such as in BinaryReaders).
    • These allocations fragment the LOH considerably, so recycling these buffers is a good practice to avoid fragmentation.
  • High CPU utilization: This is normally caused by managed code that is not optimally written, as happens when the code does the following:
    • Forces an excessive use of the GC. This was previously measured using the % Time in GC counter.
    • Throws many exceptions. You can check that with the .NET CLR Exceptions\# of Exceptions Thrown/sec counter.
    • Generates a large number of threads. This might cause the CPU to spend a lot of time switching between threads (instead of performing real work). It was previously measured using Thread\Context Switches/sec; now we can check it with the previously seen Analysis Target feature.
  • Thread contention: This happens when multiple threads try to access a shared resource (remember, a process creates an area of shared resources that all threads associated with it can access).

    The identification of this symptom is usually done by observing two performance counters:

    • .NET CLR LocksAndThreads\Contention Rate/sec
    • .NET CLR LocksAndThreads\Total # of Contentions

Your application is said to suffer from thread contention when there is a meaningful increase in these two values. The responsible code should then be identified and rewritten.
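To make the symptom more tangible, here is a small sketch (not tied to any particular counter) that produces heavy contention by serializing every thread on a single lock, next to a lock-free alternative based on Interlocked:

// Requires: using System.Threading; using System.Threading.Tasks;
private static readonly object sync = new object();
private static long lockedCounter = 0;
private static long atomicCounter = 0;

private static void ContentionDemo()
{
    // Every thread has to queue on the same lock for a trivial operation:
    // this is the kind of pattern that raises the contention counters.
    Parallel.For(0, 1000000, i => { lock (sync) { lockedCounter++; } });

    // The same work done atomically, with no lock to contend for.
    Parallel.For(0, 1000000, i => Interlocked.Increment(ref atomicCounter));

    Console.WriteLine(lockedCounter + " / " + atomicCounter);
}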

Using code to evaluate performance

As mentioned earlier, besides the set of tools we've seen, it is possible to combine these techniques with software tools especially designed to facilitate our own performance measures.

The first and best-known one is the Stopwatch class, belonging to the System.Diagnostics namespace, which we already used in the first chapters to measure sorting algorithms, for example.

The first thing to remember is that, depending on the system, the Stopwatch class will offer different values. These values can be queried up front if we want to know how accurate our measurements can be. Actually, this class exposes two important members for this purpose: Frequency and IsHighResolution. Both are read-only.

Additionally, some other members complete a nice set of functionality. Let's review what they mean:

  • Frequency: This gets the frequency of the timer as a number of ticks per second. The higher the number, the more precise our Stopwatch class can behave.
  • IsHighResolution: This indicates whether the timer is based on a high-resolution performance counter.
  • Elapsed: This gets the total elapsed time that is measured.
  • ElapsedMilliseconds: This is the same as Elapsed, but it is measured in milliseconds.
  • ElapsedTicks: This is the same as Elapsed, but it is measured in ticks.
  • IsRunning: This is a Boolean value that indicates whether Stopwatch is still in operation.

The Stopwatch class also has some convenient methods to facilitate these tasks: Reset, Restart, Start, and Stop, whose functionality you can easily infer by their names.

So, let's use our file-reading method from the previous tests, together with a Stopwatch, to check these features with some basic code:

var resolution = Stopwatch.IsHighResolution;
var frequency = Stopwatch.Frequency;
Console.WriteLine("Stopwatch initial use showing basic properties");
Console.WriteLine("----------------------------------------------");
Console.WriteLine("High resolution: " + resolution);
Console.WriteLine("Frequency: " + frequency);
Stopwatch timer = new Stopwatch();
timer.Start();
ReadFiles(pathImages);
timer.Stop();
Console.WriteLine("Elapsed time: " + timer.Elapsed);

Using this basic approach, we have a simple indication of the total time elapsed in the process, as shown in the next screenshot:

[Screenshot: output of the basic Stopwatch measurement]

We can get more precision using the other properties provided by the class. For example, we can calculate how many nanoseconds each tick represents thanks to the Frequency property.

Besides, the class also has a static StartNew() method, which we can use in simple cases like this one; so, we can change the preceding code in this manner:

static void Main(string[] args)
{
    //BasicMeasure();
    for (int i = 1; i < 9; i++)
    {
        PreciseMeasure(i);
        Console.WriteLine(Environment.NewLine);
    }
    Console.ReadLine();
}
private static void PreciseMeasure(int step)
{
    Console.WriteLine("Stopwatch precise measuring (Step " + step +")");
    Console.WriteLine("------------------------------------");
    Int64 nanoSecPerTick = (1000L * 1000L * 1000L) / Stopwatch.Frequency;
    Stopwatch timer = Stopwatch.StartNew();
    ReadFiles(pathImages);
    timer.Stop();
    var milliSec = timer.ElapsedMilliseconds;
    var nanoSec = timer.ElapsedTicks * nanoSecPerTick;  // ticks times nanoseconds per tick
    Console.WriteLine("Elapsed time (standard): " + timer.Elapsed);
    Console.WriteLine("Elapsed time (milliseconds): " + milliSec + "ms");
    Console.WriteLine("Elapsed time (nanoseconds): " + nanoSec + "ns");
}

As you can see, we use a small loop to perform the measurement eight times, so we can compare results and obtain a more accurate measure by calculating the average.

Also, we're using the static StartNew method of the class since it's valid for this test (think of some cases in which you might need several instances of the Stopwatch class to measure distinct aspects or blocks of the application, for instance).

Of course, the results won't be exactly the same in every step of the loop, as we see in the next screenshot showing the output of the program (keep in mind that depending on the task and the machine, these values will vary considerably):

[Screenshot: output of the loop of precise measurements]

Also, note that due to the system's caching and allocation of resources, every new iteration of the loop seems to take less time than the previous one. That was the case on my machine, although it depends on the system's state at the time. If you need accurate evaluations, it is recommended that you execute these tests at least 15 or 20 times and calculate the average.
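A small helper along these lines could automate that averaging. This is just a sketch that reuses the ReadFiles method and the pathImages field from the previous samples and discards the first, cold run:

private static void AverageMeasure(int iterations)
{
    var samples = new List<double>();
    for (int i = 0; i < iterations; i++)
    {
        Stopwatch timer = Stopwatch.StartNew();
        ReadFiles(pathImages);
        timer.Stop();
        if (i > 0)      // skip the warm-up run: caches and JIT compilation distort it
        {
            samples.Add(timer.Elapsed.TotalMilliseconds);
        }
    }
    // Average() comes from System.Linq
    Console.WriteLine("Average over " + samples.Count + " runs: " +
        samples.Average().ToString("F2") + " ms");
}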

Optimizing web applications

Optimizing web applications is, for many specialists, a sort of black art made up of so many factors that there are, in fact, a lot of books published on the subject.

We will focus on .NET, and, therefore, on ASP.NET applications, although some of the recommendations are extensible to any web application no matter how it is built.

Many studies have been carried out on the reasons that lead a user to uninstall an application or avoid using it. Four factors have been identified:

  • The application (or website) freezes
  • The application crashes
  • Slow responsiveness
  • Heavy battery usage (for mobiles and tablets, obviously)

So, battery considerations apart, the application should be fast, fluid and efficient. But what do these keywords really mean for us?

  • Fast means that going from a point A to a point B should always be done in minimal time: starting from application launching and going through navigation between pages, orientation changes, and so on.
  • Fluid has to do with smooth interactions. Panning pages, soft animations intended to indicate changes in the state or information presented, the elimination of glitches, image flickering, and so on.
  • An application or website is considered efficient when the use of resources is adequate: disk resources, memory footprint, battery life, bandwidth, and so on.

In any case, the overall performance is usually linked to the following areas:

  • Hosting environment (IIS, usually)
  • The ASP.NET environment
  • The application's code
  • The client side

So, let's quickly review some aspects to keep in mind when optimizing these factors, along with some other tips generally accepted as useful for improving a page's performance.

IIS optimization

There are a few techniques that are widely recognized to be useful when optimizing IIS, so I'm going to summarize some of the tips offered by Brien Posey in Top Ten Ways To Pump Up IIS Performance (https://technet.microsoft.com/es-es/magazine/2005.11.pumpupperformance.aspx), a Microsoft TechNet article:

  • Make sure HTTP Keep-Alives are enabled: This holds the connection open until all files' requests are finished, avoiding unnecessary opening and closing. This feature is enabled by default since IIS6, but it's wise to check just in case.
  • Tune connection timeouts: This means that after a period of inactivity, IIS will close the connection anyway. Make sure the timeout configured is enough for your site.
  • Enable HTTP compression: This is especially useful for static content. But beware of compressing dynamic pages: IIS has to compress them again for every request, so if you have heavy traffic, the consequence is a lot of extra work.
  • Consider web gardens: You can assign multiple worker processes to your application's pool using a web garden. If one of these processes hangs, the rest can keep attending requests.
  • Object cache TTL (Time to Live): IIS caches requested objects and assigns a TTL to each one (so that they're removed afterwards). However, note that if this time is not adequate, you have to edit the registry, and you should be very careful with it (the previously mentioned article explains how to do this).
  • Recycle: You can avoid memory leaks on the server by recycling the worker process. You can specify that IIS recycles the application pool at set intervals (every three hours, or whatever works for you), at a specific time each day, or when you consider that the application pool has received a sufficient number of requests. The <recycling> element in the application pool's configuration allows you to tune this behavior.
  • Limit Queue Length: If you detect an excess of requests reaching your server, it might be useful to limit the number of requests that IIS is allowed to queue.

ASP.NET optimization

There are many tips to optimize ASP.NET in its recent versions, corresponding to bug fixes, improvements, and suggestions made to the development team by developers everywhere, and you'll find abundant literature on the Web about it. For instance, Brij Bhushan Mishra wrote an interesting article on this subject (refer to http://www.infragistics.com/community/blogs/devtoolsguy/archive/2015/08/07/12-tips-to-increase-the-performance-of-asp-net-application-drastically-part-1.aspx), covering some not-so-well-known aspects of the ASP.NET engine.

Generally speaking, we can divide optimization into several areas: general and configuration, caching, load balancing, data access, and client side.

General and configuration

Some general and configuration rules apply when dealing with the optimization of ASP.NET applications. Let's see some of them:

  • Always remember to measure your performance in Release mode. The difference with Debug builds can be noticeable and can hide (or exaggerate) real performance issues.
  • Remember to use the profiling tools we've seen and compare the same sites using these tools and different browsers (sometimes, a specific feature can be affected in one browser but not so much in others).
  • Revise unused modules in the pipeline: even if they're not used, requests will have to pass through all the modules predefined for your application's pool. But how do we know which modules are active?
    • There's an easy way to check this in code. We can use the application instance and recover the collection of loaded modules in a variable, as you can see in the following code. Later on, just set a breakpoint to see the results:
      HttpApplication httpApps = HttpContext.ApplicationInstance;
      //Loads a list with active modules in the ViewBag
      HttpModuleCollection httpModuleCollections = httpApps.Modules;
      ViewBag.modules = httpModuleCollections;
      ViewBag.NumberOfLoadedModules = httpModuleCollections.Count;

      You should see something like the following screenshot to help you decide which is in use and which is not:

      [Screenshot: the list of loaded modules inspected at the breakpoint]
    • Once you see all the modules in action, if your website requires no authentication, you can get rid of these modules, indicating that in the Web.config file:
      <system.webServer>
        <modules>
          <removename="FormsAuthentication" />
          <removename="DefaultAuthentication" />
          <removename="AnonymousIdentification" />
          <removename="RoleManager" />
        </modules>
      </system.webServer>
    • In this way, we will use only those modules that our application requires, and that happens with every request the application makes.
  • The configuration of the pipeline mode: Starting from IIS 7, there are two pipeline modes available: Integrated and Classic. However, the latter exists only for compatibility with applications migrated from IIS 6. If your application doesn't have to cope with compatibility issues, make sure Integrated is active in the Edit Application Pool option of IIS Manager.
  • A good idea is to flush your HTML as soon as it is generated (in your web.config) and disable ViewState if you are not using it: <pages buffer="true" enableViewState="false">.
  • Another option to optimize ASP.NET application's performance is to remove unused View Engines. By default, the engine searches for views in different formats and different extensions:
    • If you're using only Razor and C#, it doesn't make sense to keep options activated that you'll never use. So, an option is to disable all engines at the beginning and only enable Razor. Just add the following code to the Application_Start event:
      // Removes view engines
      ViewEngines.Engines.Clear();
      //Add Razor Engine
      ViewEngines.Engines.Add(new RazorViewEngine());
    • Another configuration option to keep in mind is the feature called runAllManagedModulesForAllRequests, which we can find in Web.config or applicationHost.config files. It's similar to the previous one in a way since it forces the ASP.NET engine to run for every request, including those that are not necessary, such as CSS, image files, JavaScript files, and so on.
    • To configure this without interfering with other applications that might need it, we can use a local directory version of Web.config, where these resources are located, and indicate it in the same modules section that we used earlier, assigning this attribute value to false:
      <modules runAllManagedModulesForAllRequests="false">
  • Use Gzip to make sure the content is compressed. In your Web.config, you can add the following:
    <urlCompression doDynamicCompression="true" doStaticCompression="true" dynamicCompressionBeforeCache="true"/>

Caching

First of all, you should consider the Kernel Mode Cache. It's an optional feature that might not be activated by default.

  • Requests go through several layers in the pipeline and caching can be done at different levels as well. Refer to the next figure:
    [Figure: the pipeline layers where caching can take place]
    • We can go to the cache configuration in the IIS administration tools and add a new configuration, enabling the Kernel Mode Caching checkbox.
  • In relation to this, you also have the choice of using client caching. If you add a definition in a folder that holds static content, most of the time, you'll improve the web performance:
    <system.webServer>
      <staticContent>
        <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="1.00:00:00" />
      </staticContent>
    </system.webServer>
  • Another option is to use the [OutputCache] attribute applied to an action method. In this case, caching can be more granular, affecting only the output of a given action.
    • It's easy to indicate this:
      [OutputCache(Duration=10, VaryByParam="none")]
      public ActionResult Index()
      {
        return View();
      }
    • Just remember that most of the properties of this attribute are compatible with the <OutputCache> directive, except VaryByControl.
  • Besides cookies, you can use the HTML5 Web Storage API's localStorage and sessionStorage objects, which offer similar functionality but with a number of advantages in security and very fast access:
    • All data stored using sessionStorage is automatically erased from the local browser's cache when you abandon the website, while the localStorage values are permanent.

Data access

We've already mentioned some techniques for faster data access in this book, but in general, just remember that good practices almost always have a positive impact on access, such as some of the patterns we've seen in Chapter 10, Design Patterns. Also, consider using repository patterns.

Another good idea is the use of AsQueryable(), which only builds a query that can be further composed later on (for example, with Where clauses) before it is actually executed.
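As a simple sketch of the idea (dbContext, Products, and nameFilter are hypothetical names, assuming an Entity Framework context), nothing is executed against the database until the query is finally enumerated:

// No SQL is executed here: we only build the query
IQueryable<Product> query = dbContext.Products.AsQueryable();

if (!string.IsNullOrEmpty(nameFilter))
{
    // The condition is composed into the same query...
    query = query.Where(p => p.Name.Contains(nameFilter));
}

// ...and only this call sends a single, already-filtered statement to the server
var firstTwenty = query.Take(20).ToList();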

Load balancing

Besides what we can obtain using web gardens and web farms, asynchronous controllers are recommended throughout the MSDN documentation whenever an action depends on external resources.

Using the async/await structure that we've seen, we create non-blocking code that is always more responsive. Your code should then look like the sample provided by the ASP.NET site (http://www.asp.net/mvc/overview/performance/using-asynchronous-methods-in-aspnet-mvc-4):

public async Task<ActionResult> GizmosAsync()
{
  var gizmoService = new GizmoService();
  return View("Gizmos", await gizmoService.GetGizmosAsync());
}

As you can see, the big difference is that the Action method returns Task<ActionResult> instead of ActionResult itself. I recommend that you read the previously mentioned article for more details.
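The GizmoService class used in that sample is not reproduced here; just to complete the picture, this is a minimal sketch of what an awaitable service method might look like (the URL is a placeholder, and the raw response is returned as a string for brevity):

// Requires: using System.Net.Http; using System.Threading.Tasks;
public class GizmoService
{
    private static readonly HttpClient client = new HttpClient();

    public async Task<string> GetGizmosAsync()
    {
        // While this request is pending, the ASP.NET worker thread goes back
        // to the pool, which is where the scalability gain comes from.
        return await client.GetStringAsync("http://example.com/api/gizmos");
    }
}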

Client side

Optimization in the client side can be a huge topic, and you'll find hundreds of references on the Internet. The following are some of the most used and accepted practices:

  • Use the optimization techniques that we've seen included in modern browsers in order to determine possible bottlenecks.
  • Use the Single Page Application architecture based on AJAX queries to partially refresh your pages' contents.
  • Use CDNs for scripts and media content. This improves the loading time on the client side since these sites are already highly optimized.
  • Use bundling and minification techniques. If your application is built using ASP.NET 4.5 or higher, this technique is enabled by default. These two techniques improve the request load time by reducing the number of requests to the server and reducing the size of the requested resources (such as CSS and JavaScript).
    • This technique has to do with the functionality of modern browsers, which usually limit the number of simultaneous requests to six per hostname. So, every additional request is queued by the browser.
    • In this case, check the loading time, using what we saw in the browser tools to get detailed information about every request.
    • Bundling allows you to combine or bundle multiple files into a single file. This can be done for certain types of assets for which merging content does not provoke malfunctioning.
    • You can create CSS, JavaScript, and other bundles because fewer files mean fewer requests and that improves the first-page load performance.
    • The official documentation of ASP.NET shows the following comparative table of results with and without this technique and the percentage of change obtained (refer to http://www.asp.net/mvc/overview/performance/bundling-and-minification for the complete explanation):

      Metric          Using B/M    Without B/M    Change
      File requests   9            34             256%
      KB sent         3.26         11.92          266%
      KB received     388.51       530            36%
      Load time       510 MS       780 MS         53%

As the documentation explains: The bytes sent had a significant reduction with bundling as browsers are fairly verbose with the HTTP headers they apply on requests. The received reduction in bytes is not as large because the largest files (Scripts\jquery-ui-1.8.11.min.js and Scripts\jquery-1.7.1.min.js) are already minified. Note that the timings on the sample program used the Fiddler tool to simulate a slow network. (From the Fiddler Rules menu, select Performance and then select Simulate Modem Speeds.)
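If you want to try the technique yourself, bundles are registered at application start through the System.Web.Optimization API (typically in App_Start/BundleConfig.cs); the virtual paths below are just illustrative:

public static void RegisterBundles(BundleCollection bundles)
{
    bundles.Add(new ScriptBundle("~/bundles/jquery")
        .Include("~/Scripts/jquery-{version}.js"));

    bundles.Add(new StyleBundle("~/Content/css")
        .Include("~/Content/site.css"));

    // Bundling and minification only kick in with <compilation debug="false" />,
    // unless you force them with this flag.
    BundleTable.EnableOptimizations = true;
}

You can then verify the effect with the same browser tools reviewed earlier in this chapter, comparing the number of requests and the total bytes transferred.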
