This chapter covers fine-tuning the ASP.NET MVC view engines, compiling and deploying in Release mode, the hidden cost of missing favicons and web clip icons, and profiling your back-end code with MiniProfiler.
According to the Performance Golden Rule, 80 to 90% of end-user response time is spent on the front end, and so far we’ve focused our efforts on optimizing front-end code. The Surf Store application originally had a very poor end-user response time, but you’ve optimized it over the course of this book and cut that time in half! But what happens when your application continues to run slowly despite having a highly optimized front end? Sometimes you can’t avoid the fact that something in the back-end code is affecting your application’s performance.
In this chapter, you’re going to shift your focus from front-end-specific techniques to ASP.NET MVC-specific techniques. (Chapter 11 will focus on ASP.NET Web Forms performance.) We’ll use a little fine-tuning to squeeze precious milliseconds out of your ASP.NET MVC application, and we’ll begin to scrutinize the framework more closely. When you create a new ASP.NET MVC project in Visual Studio, it’s filled with useful coding helpers that make your life as a developer easier. You’re about to learn tips and tricks you can apply to your ASP.NET MVC application that will help it run more efficiently. Aside from improved page load times, you’ll also reduce the memory footprint on your servers, which is exactly what you need if your site experiences a high level of traffic.
You’ll also use a web page profiler to help you identify bottlenecks and areas for improvement. This profiling tool is different from the tools we’ve used so far because it will integrate with the back-end code and pinpoint the exact pieces of code that may be causing bottlenecks in your application. The tool is open source and easy to use. You’ll be set up and profiling in about 5 minutes!
By default, ASP.NET MVC ships with two view engines: the Web Forms view engine and the Razor view engine. When you create a new MVC project, you’re given the choice of view engines (figure 10.1).
By default, both engines are included in the application startup. ASP.NET MVC resolves named views by searching first for files that match the Web Forms view engine’s naming conventions. If your MVC application can’t find a view, the error message in figure 10.2 might be a familiar sight.
Each time MVC looks for a view, it searches all those locations until it finds what it’s looking for. In figure 10.2 you can see that the Razor view is only the fifth location searched, after ASP.NET MVC has failed to find the first few views. These extra lookups take time. If you remove the view engines you don’t use at application startup, leaving only the one you need, MVC no longer wastes time searching the other locations. So if you intend to use only one type of view engine throughout your code, make sure it’s the only view engine available for the application to search. Fortunately, ASP.NET MVC is configurable and lets you update the registered view engines.
In your application, begin by opening up the Global.asax file.
The code starts by clearing out all the available default engines. Next, add the view engine you’re using in the application. In this case it’s Razor, but it could easily be the Web Forms view engine or a custom view engine.
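As a rough sketch, assuming Razor is the only engine you need, the change in Application_Start looks something like this:

protected void Application_Start()
{
    // Clear out every view engine registered by default (Web Forms and Razor)
    ViewEngines.Engines.Clear();

    // Add back only the engine the application actually uses
    ViewEngines.Engines.Add(new RazorViewEngine());

    // ... the rest of the standard startup code (areas, filters, routes, bundles)
}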
By making this small change to your application, you’ve reduced the number of locations MVC needs to check before it finds a matching view, and you’ve reduced the time it takes for a view to be returned to the user.
If you’re about to deploy your website to a production environment, one of the most important things you can do is to make sure it’s been compiled and deployed in Release mode. You may be familiar with the compilation configuration drop-down in Visual Studio, shown in figure 10.3.
As the name suggests, Debug mode is meant for debugging and is designed to make your life as a developer a lot easier. Visual Studio’s built-in debugger allows you to pause and step through code so you can debug and inspect the values in your application. Unfortunately, Debug mode isn’t ideal for code performance. When your code runs in Debug mode, a number of nonoptimal things happen.
In Debug mode, ASP.NET MVC resolves views in a way that favors ease of development over speed: it iterates through the view locations and attempts to resolve the view every time your code renders one. This makes your life as a developer a lot easier because the development environment responds immediately to any changes you’ve made. But in Release mode, everything is optimized for performance. MVC resolves views more efficiently because it caches the result of each lookup; once a view has been resolved, the cached location is reused, dramatically speeding up the response time because the code doesn’t need to perform another disk read.
If you’d like to know whether your application is running in Debug mode, check the Web.config file. You might notice something similar to the next listing.
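Something along these lines, in the system.web section (the targetFramework value depends on your project):

<system.web>
  <compilation debug="true" targetFramework="4.5" />
</system.web>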
It isn’t ideal to run your application in Debug mode in a production environment. The preferred approach is to deploy your application with Visual Studio and publish it in Release mode. Alternatively, you can remove debug="true" from your Web.config file; saving that change causes the application pool in IIS to recycle, and the application will then run without the Debug-mode overhead.
This setting is highly important. If you’re running your application in Debug mode in a production environment, it’s almost definitely running slower than necessary, so make sure you publish your application in Release mode.
You may be familiar with the 16 x 16 image that appears on the address bar when you browse a website. This tiny image, known as a favicon, is often used to display the logo of an organization or an image you would associate with the brand you’re viewing. Figure 10.4 shows an example of two favicons in the Google Chrome browser.
By default, most modern browsers look for a favicon, so turning it off isn’t an option. The browser requests /favicon.ico from the root of the application, and if the file isn’t there, the server returns a 404 error. Fortunately, these 404 errors occur silently on your server and aren’t shown to your users; you might only pick them up with the correct error-logging tools on the server. How does this affect your application? The 404 error page is typically larger than a favicon, so the client spends more time downloading the error response than it would have spent downloading the icon itself. Forgetting to add a favicon can also cause extra disk I/O and computation, all of which adversely affect response times and increase server load.
One of my most memorable stories about favicons is the story behind the success of Instagram. During the early stages of Instagram, the website started to receive up to 25,000 sign-ups a day! The back-end engineers noticed that the error logs on the server had a lot of 404 errors coming from a missing favicon. These 404 errors were causing unnecessary disk reads that were negatively impacting the server’s load. As soon as the favicon was added, the 404 errors stopped and a huge burden was instantly lifted off the server. Instagram has over 100 million registered users today. Imagine the impact a missing favicon would have on their servers now.
It’s best to include a favicon in the root of your MVC application, but if you choose to include the favicon in another location, you may notice an interesting error in your error logs.
The controller for path /favicon.ico does not implement IController.
Fortunately, this error occurs in the background and isn’t shown to the user. Because of the nature of MVC, the request isn’t handled as a simple lookup for an icon file in the root of your application; instead, it goes through ASP.NET MVC routing. The favicon.ico path is treated as a route, and MVC first tries to match it to a controller called favicon.ico. Now your application takes a double performance hit: instead of performing a simple I/O operation, the icon request is hitting code and making the application work harder than it needs to.
If you still choose not to include a favicon in the root of your application, the following listing contains code that tells ASP.NET MVC to ignore the route.
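As a sketch, the registration in RouteConfig.cs might look like the following; the regular expression constraint is a commonly used pattern that matches favicon requests in any folder:

public static void RegisterRoutes(RouteCollection routes)
{
    // Tell the routing system to skip favicon requests instead of hunting for a controller
    routes.IgnoreRoute("{*favicon}", new { favicon = @"(.*/)?favicon.ico(/.*)?" });

    // ... existing route registrations
}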
The code tells MVC to ignore the route, saving that extra bit of work on the server. Although you’ve made sure you aren’t producing tons of 404 errors and the server isn’t doing work in vain, this line of code doesn’t necessarily make your website faster on its own. It keeps the server from wasting resources executing code and looking for a file unnecessarily, which helps keep your application efficient and running smoothly, but you should still consider adding a favicon to your application. A favicon won’t give you an instant performance benefit; it’s a long-term gain that helps ensure high traffic won’t affect the overall performance of your application.
Unfortunately, these favicon problems extend to other types of icons. A wide range of mobile devices that browse your website will look for a new type of icon called a web clip icon. Mobile devices use web clip icons when a user wants to add your web application or web page link to their device’s Home screen. This problem is similar to the favicon issue because most mobile devices will look for an apple-touch-icon.png file in the root of your application. Its name would have you think only Apple devices look for these icons, but Android devices also look for web clip icons in the root of your application. If a web clip icon isn’t there, you’ll get a 404 error. The following HTML may be familiar to you:
<link rel="apple-touch-icon" href="apple-touch-icon.png">
Unfortunately, it gets worse. Some devices will look for different sizes of web clip icons as shown in the next listing.
<link rel="apple-touch-icon-precomposed" href="apple-touch-icon-precomposed.png">
<link rel="apple-touch-icon-precomposed" href="apple-touch-icon-72x72-precomposed.png">
<link rel="apple-touch-icon-precomposed" href="apple-touch-icon-114x114-precomposed.png">
<link rel="apple-touch-icon-precomposed" href="apple-touch-icon-144x144-precomposed.png">
Much like the favicon, you can include these web clip icons in your HTML to make sure devices know where to find them, but you could also ignore the web clip icons in your RouteConfig.cs file.
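If you take the RouteConfig.cs approach, a sketch along the same lines as the favicon example might look like the following; the file names mirror the link tags shown earlier:

// Skip MVC routing for web clip icon requests so they never trigger a controller lookup
routes.IgnoreRoute("apple-touch-icon.png");
routes.IgnoreRoute("apple-touch-icon-precomposed.png");
routes.IgnoreRoute("apple-touch-icon-72x72-precomposed.png");
routes.IgnoreRoute("apple-touch-icon-114x114-precomposed.png");
routes.IgnoreRoute("apple-touch-icon-144x144-precomposed.png");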
You could choose to include all these types of web clip icons in your application, or you could tell MVC it doesn’t need to look for them. Either way, it’s important to think about web clip icons because extra lookups might be causing extra disk I/O and computation, all of which adversely affect response times and increase server load.
In the first two parts of this book, you have taken an application from a dismally performing front end to a highly optimized one. You may get to a point in your application’s development where you’re happy with the front-end code’s performance, but for some reason the website is still running slowly and the pages are taking longer than they should to load.
This back-end load time may be evident in a waterfall chart of your web page. Figure 10.5 shows a waterfall chart for a web page with a bottleneck occurring in the back-end code.
Diagnosing the problem can be extremely frustrating, and you may often find yourself looking in the wrong place. The only way to find the bottlenecks in your back-end code is to use a profiling tool, and there are many well-known ones available.
One of my favorite ASP.NET MVC profiling tools is MiniProfiler. It’s a free download and allows you to profile your MVC application as well as any database queries, Entity Framework queries, LINQ to SQL queries, and individual pieces of code.
As you’ve gone through each chapter, you’ve improved the Surf Store application’s performance step-by-step. When you look at the sample code, you’ll notice a folder for almost every chapter in this book. Each folder contains Before and After folders, as you can see in figure 10.6, repeated from chapter 3.
In this chapter, you’re going to set up the Surf Store application so we can profile the code with MiniProfiler. The sample code for this chapter has changed slightly because I’ve updated it to use a local database instead of retrieving the images from disk. As you’re focusing on the back-end code in these next chapters, I wanted to get as close to a real-world coding scenario as possible. To create a challenge for the profiler, I purposely injected code to make the website perform slowly when hitting the database.
It’s important to disable HTTP caching and output caching when profiling your website with MiniProfiler. You’re looking for code bottlenecks, not front-end performance bottlenecks, and caching will only skew the results when you reload the page. Refer to chapter 4 if you’d like to disable HTTP caching temporarily for your web application.
Let’s start profiling. First, we need to add the MiniProfiler library to the sample application by downloading the library from http://miniprofiler.com, or by using the NuGet package manager in Visual Studio 2012. If you aren’t familiar with NuGet, it’s a free, open source package management system for the .NET platform. Instead of hunting for an open source library on its own website, you can choose from thousands of free libraries that NuGet downloads and integrates into your application for you. It’s very handy because you can quickly add a library to your application without having to visit multiple websites; the libraries are all in one place. Navigate to your Solution Explorer and right-click References. Figure 10.7 shows this in action.
Next, search for MiniProfiler in the search bar and the NuGet package manager will locate the MiniProfiler package for you. Click Install and the required dependencies will be added to your application. Figure 10.8 shows the NuGet package manager and the interface that allows you to easily locate and download the libraries you need to add to your application.
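If you prefer the Package Manager Console to the dialog, installing the package is a one-line command (the core package ID is simply MiniProfiler):

PM> Install-Package MiniProfiler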
You’re almost ready to begin profiling, but first we need to set up a few things. Open the Layout Razor view and add a bit of code that will allow you to see your profiling results. Add the code in listing 10.6 immediately before the closing body tag.
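As a rough sketch, assuming the StackExchange.Profiling namespace installed by the NuGet package, the end of the layout view would look something like this:

    @* Writes out the CSS and JavaScript that display the profiling results *@
    @StackExchange.Profiling.MiniProfiler.RenderIncludes()
</body>
</html>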
The code has been shortened to keep it simple. It writes out the CSS and JavaScript that render the profiling results onto the web page.
Next, you need to update the Global.asax file and initialize the MiniProfiler so it will start profiling when your application fires up. The next listing contains code you’ll need to add to the Global.asax file.
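As a rough sketch of that setup, using the Start and Stop methods from the StackExchange.Profiling API:

using StackExchange.Profiling;   // at the top of Global.asax.cs

protected void Application_BeginRequest()
{
    // Only profile requests made from the local machine
    if (Request.IsLocal)
    {
        MiniProfiler.Start();
    }
}

protected void Application_EndRequest()
{
    // Stop the profiler at the end of every request
    MiniProfiler.Stop();
}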
The code we’ve added is only executed if the application is running locally. It checks for the presence of a local HTTP request and, if true, starts MiniProfiler. This check is added for security reasons, in case you deploy your application to a production environment with the profiling code still enabled; you wouldn’t want anyone to see this sensitive information. It’s also worth mentioning that MiniProfiler is designed to be safe for production use. The line of code in the previous listing uses Request.IsLocal to check whether the request is coming from the local environment. As an added security measure, that check could just as easily be User.IsAdmin || Request.IsLocal.
MiniProfiler allows you to choose the specific parts of your application that you’d like to profile. Instead of profiling the entire application, you can specify a piece of code to profile. It’s lightweight and you choose the areas you wish to focus on. The next snippet shows a profiling block wrapped around a piece of code.
var profiler = MiniProfiler.Current;
using (profiler.Step("Important code"))
{
    // Some important code goes here
}
This code snippet takes a reference to the current instance of MiniProfiler, which is then used in a using statement to profile a section of the code. You can have as many of these profiling blocks as you wish, and you can even use them in downstream methods. It’s also important to label each profiling block so you can pinpoint the corresponding code when you review the results; without labels you can easily become lost among the profiling blocks.
A good page to profile in the Surf Store application is the Products page. It’s retrieving a list of products based on a category from the database. This bit of code might be inefficient and cause the page to load slowly. Start, as shown in the following listing, by implementing the profiling blocks around the code you wish to examine further.
The listing contains the Surf Store application’s code that reads a list of images from a database. This code might not be running as efficiently as it should and using MiniProfiler will give you a chance to identify any inefficient code. Notice that the code contains profiling blocks that are wrapped around methods and individual lines of code. These profiling blocks can be named, which makes it easier to identify the code at a later stage. MiniProfiler will automatically tell you how long it takes for actions to execute and views to render, and the profiling blocks are useful if you wish to investigate specific pieces of code manually.
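As a rough sketch of the shape of that change, the controller action might look like the following; the action, repository, and model names here are placeholders rather than the actual Surf Store code, while the "Retrieve Products" label matches the profiling block you’ll see in the results:

public ActionResult Products(string category)
{
    var profiler = MiniProfiler.Current;
    List<ProductModel> products;    // placeholder model type

    using (profiler.Step("Retrieve Products"))
    {
        // Hypothetical repository call standing in for the Surf Store data access code
        products = productRepository.GetProductsByCategory(category);
    }

    using (profiler.Step("Build view data"))
    {
        ViewBag.Category = category;
    }

    return View(products);
}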
That’s it. You’re ready to fire up the application and begin profiling. If you navigate to the Products page of the Surf Store application, you’ll notice profiling details in the top-left corner of the screen. Click the ms duration in the corner and you’ll be presented with profiling details similar to those in figure 10.9.
In figure 10.9, notice that the profiler also details the full-page lifecycle, so if any intensive JavaScript runs on the front end, you’ll be able to identify that, too. The results show the profiling blocks we added in listing 10.8, and you’ll notice that the Retrieve Products profiling block took seven seconds to execute. The front-end code seems quite efficient in comparison and took no time at all to respond. Before profiling, I purposely injected a piece of code into the SurfStoreApp.Data project that will block the current thread for around five seconds. Using MiniProfiler helped me to identify where the bottleneck lies.
After removing the blocking thread, MiniProfiler immediately reflects these changes. Figure 10.10 shows the Retrieve Products profiling block is running a lot quicker now, and your overall page load time has been reduced significantly.
Using MiniProfiler, you were able to easily identify a problem area in the code. The profiling blocks can be chained together and used in downstream methods, so you can keep drilling down until you find the source of the problem in your code. By using a code profiler, you take the guesswork out of your performance issues.
One of the powerful features of MiniProfiler is its ability to profile databases, which can be pretty handy if you need to dig a little deeper into your application. It has built-in support for any kind of DbConnection, and it also supports Entity Framework and LINQ to SQL.
There may be a few inefficient queries running on your database, or you may find you’re executing the same query multiple times with different parameters. Database profiling quickly allows you to find queries you may be able to batch. The easiest way to use this feature is to use a factory to return your connection, as shown in listing 10.9.
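A sketch of such a factory, wrapping a standard SqlConnection in MiniProfiler’s ProfiledDbConnection; the connection string name is a placeholder:

using System.Configuration;
using System.Data.Common;
using System.Data.SqlClient;
using StackExchange.Profiling;
using StackExchange.Profiling.Data;

public static class ConnectionFactory
{
    public static DbConnection GetOpenConnection()
    {
        var connectionString =
            ConfigurationManager.ConnectionStrings["SurfStore"].ConnectionString;

        // Wrap the real connection so MiniProfiler can time every command executed on it
        DbConnection connection = new ProfiledDbConnection(
            new SqlConnection(connectionString), MiniProfiler.Current);

        connection.Open();
        return connection;
    }
}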
If you use this connection when querying the database now, you’ll find that MiniProfiler will respond and display useful debugging information. Figure 10.11 shows detailed information about a query being executed multiple times.
This detailed information about your SQL queries can be very useful when identifying bottlenecks in your code. It’s easy to use the same query twice by mistake, but MiniProfiler has made it easy to pinpoint the issue.
Although 80 to 90% of end-user response time is spent on the front end of a website, you may still have poorly optimized code running behind the scenes of your application. In this chapter, you shifted focus and looked at different techniques that enable you to fine-tune your ASP.NET MVC application and squeeze out that last bit of performance.
Often you’ll need to closely scrutinize the ASP.NET framework. Many standard out-of-the-box projects might contain unnecessary settings, such as too many view engines, that could add extra overhead to your application.
Running your code in Release mode is a vital part of deploying your application to a production environment. Running your code in Debug mode means your code isn’t running at its best. This is okay if you’re still developing your website, but it isn’t ideal if your website is on a live server. A web application running in Release mode is optimized for performance.
Quite often you’ll also need to dive a little deeper into the mechanics of your application and use a profiling tool to identify any bottlenecks. Free tools such as MiniProfiler easily integrate with ASP.NET MVC and can provide useful information that will allow you to pinpoint the exact source of your performance problems.
This chapter provided a good insight into the deeper workings of the ASP.NET MVC framework. By applying these fine-tuning improvements, you can ensure that your MVC application is running at its peak performance. In the next chapter, you’ll learn techniques to fine-tune an ASP.NET Web Forms application. You’ll also implement the MiniProfiler tool that will help you identify any problem areas in your website.