ASP.NET MVC is not designed to stand alone. As a web development framework, it inherits much of its power from the underlying ASP.NET platform, and that in turn from the .NET Framework itself (Figure 17-1).
Even though ASP.NET MVC's notions of controllers, views, and filters are flexible enough to implement almost any piece of infrastructure you'll need, to stop there would be missing the point. A good percentage of your work is already done out of the box if only you know how to leverage ASP.NET's built-in raft of time-saving facilities. There are just two problems:
Knowing what's there: We've all done it—you struggle for days or weeks to invent the perfect authentication or globalization infrastructure, and then some well-meaning colleague points out that ASP.NET already has the feature; you just need to enable it in Web.config. Curses!
This ain't Web Forms: Much of ASP.NET's older infrastructure was designed with Web Forms in mind, and not all of it translates cleanly into the MVC world. While most platform features work flawlessly, others need the odd tweak or workaround, and some just don't work or aren't applicable.
The goal of this chapter is to address both of those problems. You'll learn about the most commonly used ASP.NET platform features that are relevant in an MVC application, as well as the tips and tricks needed to overcome compatibility problems. Even if you're an ASP.NET veteran, there's a good chance you'll find something you haven't used yet.
Just one thing before we get started—this chapter doesn't attempt to document all of these features in full detail—that would take hundreds of pages. Here, you'll see the basic usage of each feature in an MVC context, with discussion of any MVC-specific issues. It should be just enough for you to decide whether the feature is right for you. When you decide to pursue a particular feature, you may wish to consult a dedicated ASP.NET platform reference. I would recommend Pro ASP.NET 4 in C# 2010, by Matthew MacDonald (Apress, 2010).
In software terms, authentication means determining who somebody is. This is completely separate from authorization, which means determining whether a certain person is allowed to do a certain thing. Authorization usually happens after authentication. Appropriately, ASP.NET's authentication facility is concerned only with securely identifying visitors to your site, setting up a security context in which you can decide what that particular visitor is allowed to do.
The simplest way to do authentication is to delegate the task to IIS (but as I'll explain shortly, this is usually only suitable for intranet applications). Do this by specifying Windows Authentication in your Web.config file, as follows:
<configuration>
<system.web>
<authentication mode="Windows" />
</system.web>
</configuration>
ASP.NET will then rely on IIS to establish a security context for incoming requests. IIS can authenticate incoming requests against the list of users known in your Windows domain or among the server's existing local user accounts, using one of the following supported mechanisms:
Anonymous: The visitor need not supply any credentials. Unauthenticated requests are mapped to a special anonymous user account.
Basic: The server uses RFC 2617's HTTP Basic authentication protocol, which causes the browser to pop up an Authentication Required prompt into which the visitor enters a name and password. These are sent in plain text with the request, so you should only use HTTP Basic authentication over an SSL connection.
Digest: Again, the server causes the browser to pop up an Authentication Required prompt, but this time the credentials are sent as a cryptographically secure hash, which is handy if you can't use SSL. Unfortunately, this mechanism only works for web servers that are also domain controllers, and even then it only works with Internet Explorer.
Integrated Windows: The server uses either Kerberos version 5 or NTLM authentication to establish identity transparently, without the visitor having to enter any credentials at all. This only works transparently when both the client and server machines are on the same Windows domain (or Windows domains configured to trust each other). If this isn't the case, it will cause an Authentication Required prompt to appear. This mode is widely used in corporate LANs, but isn't so suitable for use across the public Internet.
You can specify which of these options to allow using IIS 6 Manager (on your web site's Properties screen, go to Directory Security
If you're using IIS 7.x and some of these authentication mechanisms aren't available, you'll need to enable them on your server. Go to Control Panel
Windows Authentication has a few clear advantages:
It takes very little effort to set up, being mostly a matter of configuring IIS. You need not implement any kind of login or logout UI in your MVC application.
Since it uses your centralized Windows domain credentials, there is no need to administer a separate set of credentials, and users don't need to remember yet another password.
The Integrated option means users don't even need to slow down to enter a password, and identity is established securely without the need for SSL.
The key limitation to Windows Authentication is that it's usually suitable only for corporate intranet applications, because you need to have a separate Windows domain account for each user (and obviously you won't give out Windows domain accounts to everyone on the public Internet). For the same reason, you're unlikely to let new users register themselves, or even provide a UI to let existing users change their passwords.
When you're using Windows Authentication, perhaps for an intranet application hosted in a Windows domain, it's often reasonable to require authentication for all requests. That way, visitors are always logged in, and User.Identity.Name will always be populated with the visitor's domain account name. To enforce this, be sure to configure IIS to disable anonymous access (Figure 17-2).
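For example, any controller action can then read the visitor's identity directly. Here's a minimal sketch (the view data key and greeting are illustrative, not part of any standard template):

```csharp
public class HomeController : Controller
{
    // With anonymous access disabled in IIS, User.Identity is always
    // populated by the platform before any action method runs
    public ActionResult Index()
    {
        string account = User.Identity.Name; // e.g., "MYDOMAIN\jsmith"
        ViewData["Greeting"] = "Hello, " + account;
        return View();
    }
}
```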
However, if you want to allow unauthenticated access to certain application features (such as your site's homepage) but enforce Windows Authentication for other application features (such as administrative pages), then you need to configure IIS to allow both anonymous access and one or more other authentication options (Figure 17-2). In this arrangement, anonymous access is considered to be the default. Authentication is triggered by any of the following scenarios:
The visitor is accessing a URL for which you've configured ASP.NET's URL-based authorization system, UrlAuthorizationModule, not to allow anonymous visitors. This forces an HTTP 401 response, which causes the browser to perform authentication (opening an Authentication Required prompt if needed). As you'll see later, URL-based authorization is usually a bad choice for an ASP.NET MVC application.
The server is trying to access a file protected by the Windows access control list (ACL), and the ACL denies access to whatever identity you've configured anonymous authentication to use. Again, this causes IIS to send an HTTP 401 response. For an ASP.NET MVC application, you can only use ACLs to control access to the entire application, not to individual controllers or actions, because those controllers and actions don't correspond to files on disk.
The visitor is accessing a controller or action method decorated with ASP.NET MVC's [Authorize] filter. That authorization filter rejects anonymous access by sending back an HTTP 401 response. You can optionally specify other parameters that restrict access to particular user accounts or roles, as described in more detail in Chapter 10—for example:
public class HomeController : Controller
{
    // Allows anonymous access
    public ActionResult Index() { ... }

    // First enforces authentication, then authorizes by role
    [Authorize(Roles="Admin")]
    public ActionResult SomethingImportant() { ... }
}
You have a custom authorization filter or some other custom code in your application that returns an HttpUnauthorizedResult, or otherwise causes an HTTP 401 response.
The last two options are the most useful ones in an ASP.NET MVC application, because they give you complete control over which controllers and actions allow anonymous access and which require authentication.
Windows Authentication is usually suitable only for corporate intranet applications, so the framework provides a more widely used authentication mechanism called Forms Authentication. This one is entirely suitable for use on the public Internet, because instead of only authenticating Windows domain credentials, it works with an arbitrary credential store. It takes slightly more work to set up (you have to provide a UI for logging in and out), but it's infinitely more flexible.
Of course, the HTTP protocol is stateless, so just because someone logged in on the last request doesn't mean the server remembers them on the next. As is common across many web authentication systems, Forms Authentication uses browser cookies to preserve authentication status across requests. By default, it uses a cookie called .ASPXAUTH (this is totally independent of ASP.NET_SessionId, which tracks sessions). If you look at the contents of an .ASPXAUTH cookie,[111] you'll see a string like this:
9CC50274C662470986ADD690704BF652F4DFFC3035FC19013726A22F794B3558778B12F799852B2E84 D34D79C0A09DA258000762779AF9FCA3AD4B78661800B4119DD72A8A7000935AAF7E309CD81F28
Not very enlightening. But if I call FormsAuthentication.Decrypt(thatValue), I find that it translates into a FormsAuthenticationTicket object with the properties described in Table 17-1.
Table 17-1. Properties and Values on the Decrypted FormsAuthenticationTicket Object
Property | Type | Value |
---|---|---|
CookiePath | string | The cookie path associated with the ticket (e.g., /) |
Expiration | DateTime | The date and time at which the ticket expires |
Expired | bool | Indicates whether the ticket has already expired |
IsPersistent | bool | Indicates whether the cookie persists across browser sessions |
IssueDate | DateTime | The date and time at which the ticket was issued |
Name | string | The logged-in user's name |
UserData | string | An optional application-defined string (empty unless you set it) |
Version | int | The ticket format version number |
The most important property here is Name: that's the name that Forms Authentication will assign to the request processing thread's IPrincipal (accessible via User.Identity). It defines the logged-in user's name.
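In code, that decryption is a single call. Here's a sketch, assuming it runs inside the application that issued the cookie (only an application with the same <machineKey> can decrypt the value):

```csharp
using System;
using System.Web.Security;

// rawValue is the raw .ASPXAUTH cookie string taken from the request
FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(rawValue);

string userName = ticket.Name;       // what will become User.Identity.Name
DateTime expires = ticket.Expiration; // when the ticket stops being valid
bool stillValid = !ticket.Expired;
```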
Of course, you can't decrypt my cookie value, because you don't have the same secret <machineKey> value in your Web.config file,[112] and that's the basis of Forms Authentication security. Because nobody else knows my <machineKey>, they can't construct a valid .ASPXAUTH cookie value on their own. The only way they can get one is to log in through my login page, supplying valid credentials—then I'll tell Forms Authentication to assign them a valid .ASPXAUTH value.
When you create a new ASP.NET MVC 2 application, the default project template enables Forms Authentication for you. The default Web.config file includes the following:
<authentication mode="Forms">
  <forms loginUrl="~/Account/LogOn" timeout="2880"/>
</authentication>
This simple configuration is good enough to get you started. If you want more control over how Forms Authentication works, check out the options listed in Table 17-2, which can all be applied to your Web.config file's <forms> node.
Table 17-2. Attributes You Can Configure on Web.Config's <forms> Node
Option | Default If Not Specified | Meaning |
---|---|---|
name | .ASPXAUTH | This is the name of the cookie used to store the authentication ticket. |
timeout | 30 | This is the duration (in minutes) after which authentication cookies expire. Note that this is enforced on the server, not on the client: authentication cookies' encrypted data packets contain expiration information. |
slidingExpiration | true | If true, the cookie's expiration time is reset on each request, so it expires only after the specified period of inactivity, rather than a fixed period after logging in. |
domain | None | If set, this assigns the authentication cookie to the given domain. This makes it possible to share authentication cookies across subdomains (e.g., if your application is hosted at www.example.com, you could share the cookie across all of example.com by setting this to .example.com[a]). |
path | / | This sets the authentication cookie to be sent only to URLs below the specified path. This lets you host multiple applications on the same domain without exposing one's authentication cookies to another. |
loginUrl | /login.aspx | When Forms Authentication wishes to demand a login, it redirects the visitor to this URL. |
cookieless | UseDeviceProfile | This attempts to keep track of authentication across requests without using cookies. You'll hear more about this shortly. |
requireSSL | false | If you set this to true, the authentication cookie is flagged as Secure, meaning that browsers will send it only over an HTTPS connection. |
[a] Notice the leading dot character. This is necessary because the HTTP specification demands that a cookie's domain property must contain at least two dots. That's inconvenient if during development you want to share cookies between subdomains of a single-word host such as localhost. |
If you are even slightly concerned about security, you must always set requireSSL to true. At the time of writing, unencrypted public wireless networks and insecure WEP wireless networks are prevalent around the world. Your visitors are likely to use them, and when your .ASPXAUTH cookie is sent over an unencrypted HTTP connection—either because your application does that by design, or because an attacker forced it by injecting a spoofed response—it can easily be read by anyone in the vicinity. This is similar to session hijacking, as discussed in Chapter 13.
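Setting the flag is a one-attribute change to the <forms> node in Web.config:

```xml
<authentication mode="Forms">
  <forms loginUrl="~/Account/LogOn" timeout="2880" requireSSL="true"/>
</authentication>
```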
There are other configuration options, but these are the ones you're most likely to use.
As an alternative to editing the <forms> configuration node by hand, you can also use IIS 7.x's Authentication configuration tool, which edits Web.config on your behalf. To do this, open the Authentication tool, and then right-click and enable Forms Authentication. Next, right-click Forms Authentication and choose Edit to configure its settings (see Figure 17-3).
With Forms Authentication enabled in your Web.config file, when an unauthenticated visitor tries to access any controller or action marked with [Authorize] (or any action that returns an HttpUnauthorizedResult), they'll be redirected to your login URL.
Naturally, you need to add an appropriate controller to handle requests to your login URL. Otherwise, visitors will just get a 404 Not Found error. This controller must do the following:
Display a login prompt.
Receive a login attempt.
Validate the incoming credentials.
If the credentials are valid, call FormsAuthentication.SetAuthCookie(), which will give the visitor an authentication cookie. Then, redirect the visitor away from the login page.
If the credentials are invalid, redisplay the login screen with a suitable error message.
For examples of how to do this, refer either to the default AccountController included in any newly created ASP.NET MVC application, or to the simplified AccountController used in the SportsStore example in Chapter 6.
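A minimal controller implementing those steps might look like the following sketch. The parameter names and the credential check via Membership.ValidateUser() are illustrative assumptions here, not the book's exact code; the default AccountController is more complete:

```csharp
public class AccountController : Controller
{
    // Step 1: display the login prompt
    public ActionResult LogOn()
    {
        return View();
    }

    // Steps 2-5: receive a login attempt and act on it
    [HttpPost]
    public ActionResult LogOn(string userName, string password, string returnUrl)
    {
        // Step 3: validate the incoming credentials (here via Membership;
        // you could equally check a custom credential store)
        if (Membership.ValidateUser(userName, password))
        {
            // Step 4: issue the authentication cookie, then redirect away
            FormsAuthentication.SetAuthCookie(userName, false);
            return Redirect(returnUrl ?? "~/");
        }

        // Step 5: redisplay the login screen with an error message
        ModelState.AddModelError("", "Incorrect username or password");
        return View();
    }
}
```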
Note that SportsStore's AccountController validates incoming credentials by calling FormsAuthentication.Authenticate(), which looks for credentials stored in a <credentials> node in Web.config. Storing credentials in Web.config is occasionally OK for smaller applications where the list of authenticated users isn't likely to change over time, but you should be aware of two main limitations:
The <credentials> node can hold passwords in plain text—which gives the whole game away if anyone sees the file—or it lets you store hashed versions of the passwords using either the MD5 or SHA1 hashing algorithm. However, it doesn't let you use any salt in the hashing, so if an attacker manages to read your Web.config file, there's a good chance they could recover the original passwords using a rainbow table attack.[113]
What about administration? Who's going to keep your Web.config file up to date when you have a thousand users changing their passwords every day? Bear in mind that each time Web.config changes, your application gets reset, wiping out the cache and everyone's Session store.
To avoid these limitations, don't store credentials in Web.config, and don't use FormsAuthentication.Authenticate() to validate login attempts. You can either implement your own custom credential store, or you can use ASP.NET's built-in Membership facility, which you'll learn about shortly.
The Forms Authentication system supports a rarely used cookieless mode, in which authentication tickets are preserved by stashing them into URLs. As long as each link on your site contains the visitor's authentication ticket, then the visitor will have the same logged-in experience without their browser needing to permit or even support cookies.
Why wouldn't someone permit cookies? These days, most people will. It's understood that a lot of web applications don't function correctly if you don't allow cookies, so, for example, most web mail services will just kick such visitors out, saying, "Sorry, this service requires cookies." Nonetheless, if your situation demands it, perhaps because visitors use older mobile devices that won't allow cookies, you can switch to cookieless mode in your Web.config file, as follows:
<authentication mode="Forms">
<forms loginUrl="~/Account/LogOn" timeout="2880" cookieless="UseUri">
</forms>
</authentication>
Once a visitor logs in, they'll be redirected to a URL like this:
/(F(nMD9DiT464AxL7nlQITYUTT05ECNIJ1EGwN4CaAKKze-9ZJq1QTOK0vhXTx0fWRjAJdgSYojOYyhDil HN4SRb4fgGVcn_fnZU0x55I3_Jes1))/Home/ShowPrivateInformation
Look closely, and you'll see it follows the pattern /(F(authenticationData))/normalUrl. The authentication data replaces (but is not the same as) what would otherwise have been persisted in the .ASPXAUTH cookie. Of course, this won't match your routing configuration, but don't worry—the platform will rewrite incoming URLs to extract and remove the authentication information before the routing system gets to see those URLs. Plus, as long as you only ever generate outbound URLs using the MVC Framework's built-in helpers (such as Html.ActionLink()), the authentication data will automatically be prepended to each URL generated. In other words, it just works.
Don't use cookieless authentication unless you really have to. It's ugly (look at those URLs!), fragile (if there's one link on your site that doesn't include the token, a visitor can suddenly be logged out), and insecure. If somebody shares a link to your site, taking the URL from their browser's address bar, anybody following the link will unintentionally hijack the first person's identity. Also, if your site displays any images hosted on third-party servers, those supposedly secret URLs will get sent to that third party in the browser's Referer header.
Another one of the great conventions of the Web is user accounts. Where would we be without them? Then there's all the usual related stuff: registration, changing passwords, setting personal preferences, and so forth.
Since version 2.0, ASP.NET has included a standard user accounts infrastructure. It's designed to be flexible: it consists of a set of APIs that describe the infrastructure, along with some general purpose implementations of those APIs. You can mix and match the standard implementation pieces with your own, with compatibility assured by the common API. The API comes in three main parts:
Membership, which is about registering user accounts and accessing a repository of account details and credentials
Roles, which is about putting users into a set of (possibly overlapping) groups, typically used for authorization
Profiles, which lets you store arbitrary data on a per-user basis (e.g., personal preferences)
An implementation of a particular API piece is called a provider. Each provider is responsible for its own data storage. The framework comes with some standard providers that store data in SQL Server in a particular data schema, some others that store it in Active Directory, and so on. You can create your own provider by deriving a class from the appropriate abstract base class.
On top of this, the framework comes with a set of standard Web Forms server controls that use the standard APIs to provide UIs for common tasks like user registration. These controls, being reliant on postbacks, aren't really usable in an MVC application, but that's OK—you can create your own without much difficulty, as you're about to see.
This architecture is depicted in Figure 17-4.
The advantages of using the built-in Membership, Roles, and Profiles system are as follows:
Microsoft has already gone through a lengthy research and design process to come up with a system that works well in many cases. Even if you just use the APIs (providing your own storage and UI), you are working to a sound design.
For some simple applications, the built-in storage providers eliminate the work of managing your own data access. Given the clear abstraction provided by the API, you could in the future upgrade to using a custom storage provider without needing to change any UI code.
The API is shared across all ASP.NET applications, so you can reuse any custom providers or UI components across projects.
It integrates well with the rest of ASP.NET. For example, User.IsInRole() is the basis of many authorization systems, and that obtains role data from your selected roles provider.
For some smaller, intranet-type applications, you can use ASP.NET's built-in management tools, such as the Web Administration Tool or IIS 7.x's Membership, Roles, and Profiles configuration tools, to manage your user data without needing to create any UI of your own.
And, of course, there are disadvantages:
The built-in SQL storage providers need direct access to your database, which feels a bit dirty if you have a strong concept of a domain model or use a particular ORM technology elsewhere.
The built-in SQL storage providers demand a specific data schema that isn't easy or tidy to share with the rest of your application's data schema. SqlProfileProvider uses an especially disgusting database schema, in which profile entries are stored as colon-separated name/value pairs, so it's basically impossible to query.
As mentioned, the built-in server controls don't work in an MVC application, so you will need to provide your own UI.
While you can use the Web Administration Tool to manage your user data, it's not supposed to be deployed to a production web server, and even if you do deploy it, it looks and feels nothing like the rest of your application.
Overall, it's worth following the API because of the clear separation of concerns, reuse across projects, and integration with the rest of ASP.NET, but you'll only want to use the built-in SQL storage providers for small or throwaway projects.
The framework comes with membership providers for SQL Server (SqlMembershipProvider) and Active Directory (ActiveDirectoryMembershipProvider). These two are the most commonly used, so they are the ones you'll learn about in this chapter. Many other prebuilt membership providers are just a web search away, including ones based around Oracle, NHibernate, and XML files.
When you create a new ASP.NET MVC 2 application (except when using the Empty project template), it's configured to use SqlMembershipProvider by default. Your Web.config file will initially include the following entries:
<configuration>
  <connectionStrings>
    <add name="ApplicationServices"
         connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;
                           AttachDBFilename=|DataDirectory|\aspnetdb.mdf;
                           User Instance=true"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
  <system.web>
    <membership>
      <providers>
        <clear/>
        <add name="AspNetSqlMembershipProvider"
             type="System.Web.Security.SqlMembershipProvider"
             connectionStringName="ApplicationServices"
             ... />
      </providers>
    </membership>
  </system.web>
</configuration>
If you use the ASP.NET MVC 2 Empty Web Application project template, it doesn't prepopulate any connection strings or membership providers in Web.config (it's supposed to be empty, of course). So, to use SqlMembershipProvider, you'll need to add configuration along the lines shown here, or copy it from a nonempty ASP.NET MVC 2 application.
SQL Server 2005 Express Edition and SQL Server 2008 Express Edition both support user instance databases. Unlike regular SQL Server databases, these databases don't have to be created and registered in advance. You simply open a connection to SQL Server Express saying where the database's .mdf file is stored on disk. SQL Server Express will open the .mdf file, creating it on the fly first if needed. This can be convenient in simple web hosting scenarios because, for instance, you don't even have to configure SQL logins or users.
Notice how this is configured in the preceding Web.config settings. The default connection string specifies User Instance=true. The special AttachDBFilename syntax tells the system to create a SQL Server Express user instance database at ~/App_Data/aspnetdb.mdf. When ASP.NET first creates the database, it will prepopulate it with all the tables and stored procedures needed to support the Membership, Roles, and Profiles features.
If you plan to store your data in SQL Server Express edition—and not in any other edition of SQL Server—then you can leave these settings as they are. However, if you intend to use a non-Express edition of SQL Server, you must create your own database and prepare its schema manually, as I'll describe next.
These default settings assume you have an Express edition of SQL Server installed locally. If you don't, any attempt to use SqlMembershipProvider will result in an error saying, "SQLExpress database file autocreation error." You must either install SQL Server Express locally, change the connection string to refer to a different server where SQL Server Express is installed, or change the connection string to refer to a database that you've already prepared manually.
If you want to use a non-Express edition of SQL Server (i.e., any of the for-pay editions), then you'll need to create your own database in the usual way through SQL Server Management Studio or Visual Studio. To add the schema elements required by SqlMembershipProvider, run the tool aspnet_regsql.exe (without specifying any command-line arguments), which is in your .NET Framework directory.[114] This tool includes the screen shown in Figure 17-5.
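The same schema can also be installed non-interactively by passing command-line arguments instead of launching the wizard. A sketch, where the server and database names are placeholders (-S names the server, -E uses Windows authentication, -d names the database, and -A all installs the schema for all features):

```shell
"%WINDIR%\Microsoft.NET\Framework\v4.0.30319\aspnet_regsql.exe" -S MYSERVER -E -d MyAppDb -A all
```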
Once you've told it how to find your database, it adds a set of tables and stored procedures that support the Membership, Roles, and Profiles features, all prefixed by aspnet_ (Figure 17-6). You should then set your connection string in Web.config to refer to your manually created database.
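For example, the ApplicationServices connection string might then look something like this (the server and database names are placeholders for your own):

```xml
<connectionStrings>
  <add name="ApplicationServices"
       connectionString="Server=MYSERVER;Database=MyAppDb;Integrated Security=SSPI"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```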
Visual Studio ships with a tool called the Web Administration Tool (WAT). It's a GUI for managing your site's settings, including your Membership, Roles, and Profiles data. Launch it from Visual Studio by selecting the menu item Project
Internally, the WAT uses the Membership APIs to talk to your default membership provider, so the WAT is compatible with any MembershipProvider, including any custom one you might create.
When you finally deploy your application to a production web server, you'll find that the WAT isn't available there. That's because the WAT is part of Visual Studio, which you're unlikely to have installed on the web server. It is technically possible to deploy the WAT to your web server (see http://forums.asp.net/p/1010863/1761029.aspx), but it's tricky, so in reality you're more likely to develop your own UI using the Membership APIs. Or, if you're running IIS 7.x, you can use its .NET Users configuration tool.
Among IIS 7.x Manager's many brightly colored icons, you'll find .NET Users (Figure 17-8).
As well as allowing you to create, edit, and delete members, this tool also lets you configure a default membership provider. Just like the WAT, it edits your application's root Web.config file on your behalf, and it uses the Membership APIs to communicate with your registered MembershipProvider.
Unlike the WAT, the .NET Users tool will be available on your production server (assuming it runs IIS 7.x). It's therefore a very quick way to get basic member management functionality for small applications where membership is managed only by your server administrator.
At the time of writing, IIS 7.x Manager's .NET Users tool doesn't work with the default membership providers for .NET 4 applications—it fails, saying, "This feature cannot be used because the default provider type could not be determined to check whether it is a trusted provider." For information about this bug, see http://tinyurl.com/y6vrtqv. As that web page explains, the current workaround involves manually editing Web.config to use the .NET 3.5 version of SqlMembershipProvider.
It's likely that you'll want to use your membership provider to validate login attempts. This is very easy! For example, to upgrade SportsStore to work with your membership provider, just change one line of code in AccountController's LogOn() method as follows:
[HttpPost]
public ActionResult LogOn(LogOnViewModel model, string returnUrl)
{
    if (ModelState.IsValid) // No point trying authentication if model is invalid
        if (!Membership.ValidateUser(model.UserName, model.Password))
            ModelState.AddModelError("", "Incorrect username or password");

    ... rest as before ...
}
Previously, this method validated login attempts by calling FormsAuthentication.Authenticate(username, password), which looks for credentials in a <credentials> node in Web.config. Now, however, it will only accept login attempts that match valid credentials known to your active membership provider.
In many cases, you might decide that ASP.NET's built-in membership providers aren't appropriate for your application. ActiveDirectoryMembershipProvider is only applicable in certain corporate domain scenarios, and SqlMembershipProvider uses its own custom SQL database schema, which you might not want to mix with your own schema.
You can create a custom membership provider by deriving a class from MembershipProvider. Start by writing the following:
public class MyNewMembershipProvider : MembershipProvider { }
and then right-click MembershipProvider and choose Implement Abstract Class. You'll find there are quite a lot of methods and properties—currently all throwing a NotImplementedException—but you can leave most of them as they are. To integrate with Forms Authentication, the only method that you strictly need to attend to is ValidateUser(). Here's a very simple example:
public class SiteMember
{
    public string UserName { get; set; }
    public string Password { get; set; }
}

public class SimpleMembershipProvider : MembershipProvider
{
    // For simplicity, just working with a static in-memory collection
    // In any real app you'd need to fetch credentials from a database
    private static List<SiteMember> Members = new List<SiteMember> {
        new SiteMember { UserName = "MyUser", Password = "MyPass" }
    };

    public override bool ValidateUser(string username, string password)
    {
        return Members.Exists(m => (m.UserName == username) && (m.Password == password));
    }

    /* Omitted: All the other methods just throw NotImplementedException */
}
Once you've created your custom membership provider, register it in your Web.config file as follows:
<configuration>
  <system.web>
    <membership defaultProvider="MyMembershipProvider">
      <providers>
        <clear/>
        <add name="MyMembershipProvider"
             type="Namespace.SimpleMembershipProvider"/>
      </providers>
    </membership>
  </system.web>
</configuration>
If you want your custom membership provider to support adding and removing members, integrating with the WAT and IIS 7.x's .NET Users GUI, then you'll need to add behavior to other overridden methods, such as CreateUser() and GetAllUsers().
Even though it's very easy to create your own custom membership provider and use it in your application, it can be harder to make the .NET Users GUI in IIS 7.5 cooperate with a custom provider. To make it work, you must put your provider in a strongly named .NET assembly, register it in the server's GAC, and also reference it in the server's Administration.config file.
So far, you've seen how the framework manages your application's set of credentials and validates login attempts (via a membership provider), and how it keeps track of a visitor's logged-in status across multiple requests (via Forms Authentication). Both of these are matters of authentication, which means securely identifying who a certain person is.
The next common security requirement is authorization, which means deciding what a certain person is allowed to do. The framework offers a system of role-based authorization, by which each member can be assigned to a set of roles, and their membership of a given role is understood to denote authorization to perform certain actions. A role is merely a unique string, and it only has meaning in that you choose to associate meanings with certain strings. For example, you might choose to define three roles:
ApprovedMember
CommentsModerator
SiteAdministrator
These are just arbitrary strings, but they gain meaning when, for example, your application grants administrator console access only to members in the SiteAdministrator role.
Each role is totally independent of the others—there's no hierarchy—so being a SiteAdministrator doesn't automatically grant the CommentsModerator role or even the ApprovedMember role. Each one must be assigned independently; a given member can hold any combination of roles.
Just as with membership, the ASP.NET platform expects you to work with roles through its provider model, offering a common API (the RoleProvider base class) and a set of built-in providers you can choose from. And of course, you can implement your own custom provider.
Also as with membership, you can manage roles (and grant or deny roles to members) using either the WAT or IIS 7.x's .NET Roles and .NET Users configuration tools, as shown in Figure 17-9.
Just like the .NET Users tool, the .NET Roles tool in IIS 7.x doesn't currently work with the default roles providers for .NET 4 applications. See the preceding coverage of the .NET Users tool for a possible workaround.
In most cases—and not just because of the incompatibility with .NET 4—it will be more useful not to use the built-in tools, and instead create your own custom administration screens within your application. You can manage roles using the static System.Web.Security.Roles object, which represents your default role provider. For example, you can use the following to add a user to a role:
Roles.AddUserToRole("billg", "CommentsModerator");
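The static Roles API covers the other common management operations too. The following is a minimal sketch of a custom administration controller built on those methods; the controller, action, and role names are invented for illustration, and a real application would add input validation and error handling.

```csharp
// Hypothetical role-administration controller (names are illustrative)
[Authorize(Roles = "SiteAdministrator")]
public class RoleAdminController : Controller
{
    public ActionResult Index()
    {
        // Roles.GetAllRoles() returns every role known to the provider
        return View(Roles.GetAllRoles());
    }

    [HttpPost]
    public ActionResult Grant(string userName, string roleName)
    {
        if (!Roles.RoleExists(roleName))
            Roles.CreateRole(roleName);   // Create the role on first use
        Roles.AddUserToRole(userName, roleName);
        return RedirectToAction("Index");
    }

    [HttpPost]
    public ActionResult Revoke(string userName, string roleName)
    {
        Roles.RemoveUserFromRole(userName, roleName);
        return RedirectToAction("Index");
    }
}
```

All of these calls go through whichever role provider you've nominated as the default in Web.config.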
If you're using SqlMembershipProvider, you'll find SqlRoleProvider to be a very quick and convenient way to get role-based authorization into your application.[115] The Web.config file in a brand new ASP.NET MVC 2 nonempty application contains the following settings:
<configuration>
<system.web>
<roleManager enabled="false">
<providers>
<clear/>
<add name="AspNetSqlRoleProvider"
type="System.Web.Security.SqlRoleProvider"
connectionStringName="ApplicationServices"
applicationName="/" />
<add name="AspNetWindowsTokenRoleProvider"
type="System.Web.Security.WindowsTokenRoleProvider"
applicationName="/" />
</providers>
</roleManager>
  </system.web>
</configuration>
As you can see, two possible role providers are listed, but neither is enabled by default. To enable SqlRoleProvider, change the <roleManager> node's attributes as follows:
<roleManager enabled="true" defaultProvider="AspNetSqlRoleProvider">
Assuming you've already created the database schema as explained for SqlMembershipProvider, your role provider is now ready to work. Alternatively, you can nominate AspNetWindowsTokenRoleProvider as the default role provider if you're using Windows Authentication and would like users' roles to be determined by their Windows Active Directory roles.
You've seen how to use ASP.NET MVC's built-in [Authorize] filter to restrict access only to authenticated visitors. You can restrict access further, authorizing only authenticated visitors who are in a particular role—for example:
[Authorize(Roles="CommentsModerator, SiteAdministrator")]
public ViewResult ApproveComment(int commentId) {
// Implement me
}
When you specify multiple comma-separated roles, the visitor is granted access if they are in any one of those roles. The [Authorize] filter is covered in more detail in Chapter 10. You can secure an entire controller by assigning the [Authorize(Roles=...)] attribute to the controller class instead of to an individual action method.
If you want further programmatic access to role information, your action methods can call User.IsInRole(roleName) to determine whether the current visitor is in a particular role, or System.Web.Security.Roles.GetRolesForUser() to list all the roles held by the current visitor.
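As a brief sketch of those programmatic checks inside an action method (the action name and ViewData keys here are invented for illustration):

```csharp
public ActionResult Dashboard()
{
    // True if the current visitor holds the CommentsModerator role
    ViewData["canModerate"] = User.IsInRole("CommentsModerator");

    // The parameterless overload returns all roles for the current visitor
    ViewData["allRoles"] = Roles.GetRolesForUser();

    return View();
}
```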
Not surprisingly, you can create a custom role provider by deriving a type from the RoleProvider base class. As before, you can use Visual Studio's Implement Abstract Class shortcut to satisfy the type definition without writing any real code. If you don't need to support online role management (e.g., using the IIS 7.x .NET Roles configuration tool or the WAT), you only need to put real code in GetRolesForUser(), as in the following example:
public class MyRoleProvider : RoleProvider
{
    public override string[] GetRolesForUser(string username)
    {
        // Your real provider should probably fetch roles info from a database
        if (username == "Steve")
            return new string[] { "ApprovedMember", "CommentsModerator" };
        else
            return new string[] { };
    }

    /* Omitted: Everything else throws a NotImplementedException */
}
To use this custom role provider, edit your Web.config file's <roleManager> node to nominate this class as the default provider.
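Such a registration might look like the following sketch; the MyApp.Providers namespace is an assumption, so adjust the type reference to wherever your class actually lives.

```xml
<roleManager enabled="true" defaultProvider="MyRoleProvider">
  <providers>
    <clear/>
    <add name="MyRoleProvider" type="MyApp.Providers.MyRoleProvider"/>
  </providers>
</roleManager>
```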
Membership keeps track of your members, and Roles keeps track of what they're allowed to do. But what if you want to keep track of other per-user data like "member points" or "site preferences" or "favorite foods"? That's where Profiles comes in: it's a general purpose, user-specific data store that follows the platform's familiar provider pattern.
It's an appealing option for smaller applications that are built around SqlMembershipProvider and SqlRoleProvider, because it uses the same database schema, so it feels like you're getting something for nothing. In larger applications, though, where you have a custom database schema and a stronger notion of a domain model, you will probably have different, better infrastructure for storing per-user data specific to your application, so you would not really benefit from using Profiles.
I'm sure you've spotted the pattern by now: once you've created the Membership/Roles/Profiles database schema using the aspnet_regsql.exe tool (or let it be created automatically if you're using SQL Server Express Edition with a file-based database), you can use a built-in profile provider called SqlProfileProvider. It's enabled by default in new ASP.NET MVC 2 (nonempty) projects, because Web.config contains the following:

<configuration>
  <system.web>
    <profile>
      <providers>
        <clear/>
        <add name="AspNetSqlProfileProvider"
             type="System.Web.Profile.SqlProfileProvider"
             connectionStringName="ApplicationServices"
             applicationName="/" />
      </providers>
    </profile>
  </system.web>
</configuration>
Before you can read or write profile data, you need to define the structure of the data you want to work with. Do this by adding a <properties> node under <profile> inside Web.config—for example:
<profile>
  <providers>...</providers>
  <properties>
<add name="Name" type="String" />
<add name="PointsScored" type="Integer" />
<group name="Address">
<add name="Street" type="String" />
<add name="City" type="String" />
<add name="ZipCode" type="String" />
<add name="State" type="String" />
<add name="Country" type="String" />
</group>
</properties>
</profile>
As you can see, properties can be put into groups, and for each one, you must specify its .NET type. You can use any .NET type as long as it's serializable.
Unless you implement a custom profile provider, there's a performance penalty for using anything other than the most basic types (string, int, etc.). Because SqlProfileProvider can't detect whether a custom object has been modified during a request, it writes a complete set of updated profile information to your database at the end of every request.
With this configuration in place, you can read and write per-user profile data in your action methods:
public ActionResult ShowMemberNameAndCountry()
{
    ViewData["memberName"] = HttpContext.Profile["Name"];
    ViewData["memberCountry"] = HttpContext.Profile.GetProfileGroup("Address")["Country"];
    return View();
}

public RedirectToRouteResult SetMemberNameAndCountry(string name, string country)
{
    HttpContext.Profile["Name"] = name;
    HttpContext.Profile.GetProfileGroup("Address")["Country"] = country;
    return RedirectToAction("ShowMemberNameAndCountry");
}
The framework loads the logged-in visitor's profile data the first time you try to access one of its values, and saves any changes at the end of the request. You don't have to explicitly save changes—it happens automatically. Note that by default this only works for logged-in, authenticated visitors, and will throw an exception if you attempt to write profile properties when the current visitor isn't authenticated.
The designers of this feature intended you to access profile data through a strongly typed proxy class automatically generated from your <properties> configuration (e.g., Profile.Address.Country). Unfortunately, this proxy class is only generated automatically if you're using a Visual Studio "web site" project, not a Visual Studio web application project. ASP.NET MVC 2 applications are web application projects, so this proxy class won't be generated. If you really want the strongly typed proxy class, check out the Web Profile Builder project, which at the time of writing is only available for Visual Studio 2005 and 2008 (http://code.msdn.microsoft.com/WebProfileBuilder).
The framework also supports a notion of anonymous profiles, in which profile data is associated with unregistered visitors and can be persisted across browsing sessions. To enable this, first flag one or more profile property definitions in Web.config with allowAnonymous:
<profile>
<properties>
<add name="Name" type="String" allowAnonymous="true" />
</properties>
</profile>
Next, make sure you have enabled anonymous identification in Web.config
:
<configuration>
<system.web>
<anonymousIdentification enabled="true" />
</system.web>
</configuration>
This means that ASP.NET will track unauthenticated visitors by giving them a cookie called .ASPXANONYMOUS, which by default expires after 100,000 minutes (that's just less than 70 days). There are various options you can specify on <anonymousIdentification>, such as the name of the tracking cookie, its duration, and so on.
This configuration makes it possible to read and write profile properties for unauthenticated visitors (in this example, just the Name property), but beware that every unauthenticated visitor will now result in a separate user account being saved in your database.
As is usual for ASP.NET's provider model, you can create a custom profile provider by deriving a class from the abstract base class, ProfileProvider. Unless you want to support profile management through the WAT or IIS 7.x's .NET Profiles configuration tool, you only need to add code to the GetPropertyValues() and SetPropertyValues() methods.
The following example does not save any state to a database, and is not thread safe, so it's not entirely realistic. However, it does demonstrate how the ProfileProvider API works, and how you can access the individual profile data items that you're expected to load and save.
public class InMemoryProfileProvider : ProfileProvider
{
    // This is an in-memory collection that never gets persisted to disk
    // Warning: For brevity, no attempt is made to keep this thread safe
    // The keys in this dictionary are user names; the values are
    // dictionaries of profile data for that user
    private static IDictionary<string, IDictionary<string, object>> _data
        = new Dictionary<string, IDictionary<string, object>>();

    public override SettingsPropertyValueCollection GetPropertyValues(
        SettingsContext context, SettingsPropertyCollection collection)
    {
        // See if we've got a record of that user's profile data
        IDictionary<string, object> userData;
        _data.TryGetValue((string)context["UserName"], out userData);

        // Now build and return a SettingsPropertyValueCollection
        var result = new SettingsPropertyValueCollection();
        foreach (SettingsProperty prop in collection)
        {
            var spv = new SettingsPropertyValue(prop);
            if (userData != null) // Use user's profile data if available
                spv.PropertyValue = userData[prop.Name];
            result.Add(spv);
        }
        return result;
    }

    public override void SetPropertyValues(SettingsContext context,
        SettingsPropertyValueCollection collection)
    {
        string userName = (string)context["UserName"];
        if (string.IsNullOrEmpty(userName))
            return;

        // Simply converts SettingsPropertyValueCollection to a dictionary
        _data[userName] = collection.Cast<SettingsPropertyValue>()
                                    .ToDictionary(x => x.Name, x => x.PropertyValue);
    }

    /* Omitted: Everything else throws NotImplementedException */
}
In your custom provider, you can ignore the idea of property groups and think of the data as a flat key/value collection, because the API works in terms of fully qualified dot-separated property names, such as Address.Street. You don't have to worry about anonymous profiles either—if these are enabled, ASP.NET will generate a GUID as the username for each anonymous user. Your code doesn't have to distinguish between these and real usernames.
Of course, to use your custom profile provider, you need to register it in Web.config using the <profile> node.
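A registration for the InMemoryProfileProvider shown earlier might look like the following sketch; the MyApp.Providers namespace is an assumption, so adjust it to match your project.

```xml
<profile defaultProvider="InMemoryProfileProvider">
  <providers>
    <clear/>
    <add name="InMemoryProfileProvider" type="MyApp.Providers.InMemoryProfileProvider"/>
  </providers>
  <!-- <properties> definitions go here, as shown earlier -->
</profile>
```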
Historically, ASP.NET has been so heavily dependent on URLs matching the project's source code folder structure that it made a lot of sense to define authorization rules in terms of URL patterns. Many Web Forms applications, for example, keep all of their administration ASPX pages in a folder called /Admin/; this means you can use the URL-based authorization feature to restrict access to /Admin/* only to logged-in users in some specific role. You might also set up a special-case rule so that logged-out visitors can still access /Admin/Login.aspx.
ASP.NET MVC works with the completely flexible core routing system, so it doesn't always make sense to configure authorization in terms of URL patterns—you might prefer the fidelity of attaching [Authorize] filters to specific controllers and actions instead. On the other hand, sometimes it does make sense to enforce authorization in terms of URL patterns, because by your own convention, administrative URLs might always start with /Admin/ (e.g., if you're using the areas feature and have an area called Admin).
If you do want to use URL-based authorization in an MVC application, you can set it up using the WAT, or you can edit your Web.config file directly. For example, place the following immediately above (and outside) your <system.web> node:
<location path="Admin">
  <system.web>
    <authorization>
      <deny users="?"/>
      <allow roles="SiteAdmin"/>
      <deny users="*"/>
    </authorization>
  </system.web>
</location>
This tells UrlAuthorizationModule (which is registered for all ASP.NET applications by default) that for the URL ~/Admin and URLs matching ~/Admin/*, it should do the following:

Deny access for unauthenticated visitors (<deny users="?"/>)

Allow access for authenticated visitors in the SiteAdmin role (<allow roles="SiteAdmin"/>)

Deny access to all other visitors (<deny users="*"/>)
When visitors are denied access, UrlAuthorizationModule sets up an HTTP 401 ("unauthorized") response, which invokes your active authentication mechanism. If you are using Forms Authentication, this means the visitor will be redirected to your login page (whether or not they are already logged in).
In most cases, it's more logical to define authorization rules on controllers and actions using [Authorize] filters than on URL patterns in Web.config, because you may want to change your URL schema without worrying that you're creating security loopholes.
Most web applications need to be configurable, for two main reasons:
So that in different deployment environments you can attach them to different external resources. For example, you may need to provide connection strings for databases, URLs for web services, or disk paths for file storage locations.
So that you can vary their behavior—for example, to enable or disable features depending on your clients' requirements.
The core ASP.NET platform provides a good range of configuration facilities, from the simple to the sophisticated. Don't store your application configuration data in the server's registry (which is very hard to deploy and manage in source control), and don't store your configuration data in custom text files (which you must manually parse and cache). Instead, make your job easier by using the built-in WebConfigurationManager API.
The WebConfigurationManager API is great for reading configuration settings out of your Web.config file—it's much easier than retrieving configuration settings from a database table. What's more, WebConfigurationManager can write changes and new values back into your Web.config file. However, for performance, scalability, and security reasons,[116] you should avoid writing changes to Web.config frequently, and consider storing frequently updated settings (such as user preferences) in your application's database instead. WebConfigurationManager is best for the sort of settings that don't change between deployments, such as network addresses, disk paths, or anything controlled only by the server administrator.
Because it's such a common requirement, ASP.NET has a special API for configuring connection strings. If you add entries to your Web.config file's <connectionStrings> node, such as the following:
<configuration>
  <connectionStrings>
    <add name="MainDB" connectionString="Server=myServer;Database=someDB; ..."/>
    <add name="AuditingDB" connectionString="Server=audit01;Database=myDB; ..."/>
  </connectionStrings>
</configuration>
then you can access those values via WebConfigurationManager.ConnectionStrings—for example:

string connectionString =
    WebConfigurationManager.ConnectionStrings["MainDB"].ConnectionString;

Note that the indexer returns a ConnectionStringSettings object, so you need to read its ConnectionString property to get the actual string.
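As a quick sketch of putting the retrieved string to work (assuming System.Data.SqlClient and System.Web.Configuration are imported, and that "MainDB" matches the entry above):

```csharp
// Open a connection using the configured "MainDB" string
var settings = WebConfigurationManager.ConnectionStrings["MainDB"];
using (var conn = new SqlConnection(settings.ConnectionString))
{
    conn.Open();
    // ... run commands against the database here
}
```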
In Chapter 4, you saw how to apply this technique to retrieve a connection string and use it to configure SportsStore's DI container with Ninject's Bind<service> ... WithConstructorArgument(...) syntax.
If you need a simple way to configure anything other than connection strings, you can use the Web.config file's <appSettings> node, which accepts arbitrary key/value pairs—for example:
<configuration>
  <appSettings>
    <add key="Mailer.ServerHost" value="smtp.example.com"/>
    <add key="Mailer.ServerPort" value="25"/>
    <add key="Uploader.TempDirectory" value="e:\webdata\uploadedFiles"/>
  </appSettings>
</configuration>
Then you can access those values using WebConfigurationManager.AppSettings as follows:
string host = WebConfigurationManager.AppSettings["Mailer.ServerHost"]; int port = int.Parse(WebConfigurationManager.AppSettings["Mailer.ServerPort"]);
Since <appSettings> doesn't give you any built-in way to put related settings into groups, you'll need to establish your own naming conventions to keep things organized and avoid key clashes. A common technique is to use keys called componentName.settingName, as I showed in the preceding code snippet. The framework doesn't care about the dots—it just requires the entire key to be unique.
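Since AppSettings values are plain strings, a small helper can make typed reads safer. This is an optional sketch, not part of the framework; the class and key names are invented for illustration.

```csharp
// Hypothetical helper: read an appSettings value with a fallback,
// so a missing or malformed key doesn't crash the app at run time
public static class AppSettingsHelper
{
    public static int GetInt(string key, int defaultValue)
    {
        string raw = WebConfigurationManager.AppSettings[key];
        int value;
        return int.TryParse(raw, out value) ? value : defaultValue;
    }
}

// Usage:
// int port = AppSettingsHelper.GetInt("Mailer.ServerPort", 25);
```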
Sometimes you'll want to configure data structures that are more complex than simple key/value pairs. For example, you might want to configure an ordered list or a hierarchy of settings, which would be difficult to express as entries in a key/value collection.
To configure an arbitrary list or hierarchy of structured settings, start simply by representing those settings as free-form XML in your Web.config file's <configuration> node—for example:
<configuration>
  <mailServers>
    <server host="smtp1.example.com" portNumber="25">
      <useFor domain="example.com"/>
      <useFor domain="staff.example.com"/>
      <useFor domain="alternative.example"/>
    </server>
    <server host="smtp2.example.com" portNumber="5870">
      <useFor domain="*"/>
    </server>
  </mailServers>
</configuration>
Note that ASP.NET has no native concept of a <mailServers> node—this is just arbitrary XML of my choice. Next, create an IConfigurationSectionHandler class that can understand this XML. You just need to implement a Create() method that receives the custom data as an XmlNode called section, and transforms it into a strongly typed result. This example produces a list of MailServerEntry objects:
public class MailServerEntry
{
    public string Hostname { get; set; }
    public int PortNumber { get; set; }
    public List<string> ForDomains { get; set; }
}

public class MailServerConfigHandler : IConfigurationSectionHandler
{
    public object Create(object parent, object configContext, XmlNode section)
    {
        return section.SelectNodes("server").Cast<XmlNode>()
            .Select(x => new MailServerEntry {
                Hostname = x.Attributes["host"].InnerText,
                PortNumber = int.Parse(x.Attributes["portNumber"].InnerText),
                ForDomains = x.SelectNodes("useFor")
                              .Cast<XmlNode>()
                              .Select(y => y.Attributes["domain"].InnerText)
                              .ToList()
            }).ToList();
    }
}
Since ASP.NET 2.0, instead of creating an IConfigurationSectionHandler class, you have the alternative of using the newer ConfigurationSection API. That lets you put .NET attributes onto configuration wrapper classes, declaratively associating class properties with configuration attributes. The newer API is also more sophisticated, as it deals with inheriting and overriding configuration between parent and child configuration files.
However, in my experience, the new API significantly increases the amount of code you have to write in many routine scenarios. I often find it quicker and simpler to implement IConfigurationSectionHandler manually, and to populate my configuration object using an elegant LINQ query, as shown in this example.
Finally, register your custom configuration section and its IConfigurationSectionHandler class by adding a new node to your Web.config file's <configSections> node:

<configuration>
  <configSections>
    <section name="mailServers" type="namespace.MailServerConfigHandler, assembly"/>
  </configSections>
</configuration>
Then you can access your configuration data anywhere in your code using WebConfigurationManager.GetSection():

IList<MailServerEntry> servers =
    WebConfigurationManager.GetSection("mailServers") as IList<MailServerEntry>;
One of the nice things about WebConfigurationManager.GetSection() is that, internally, it caches the result of your IConfigurationSectionHandler's Create() method call, so it doesn't repeat the XML parsing every time a request needs to access that particular configuration section. The cached value expires only when your application is recycled (e.g., after you edit and save your Web.config file).
If you have some data that you want to retain across multiple requests, you could store it in the Application collection. For example, an action method might contain the following line:
HttpContext.Application["mydata"] = someImportantData;
The someImportantData object will remain alive for as long as your application runs, and will always be accessible at HttpContext.Application["mydata"]. It might seem, therefore, that you can use the Application collection as a cache for objects or data that are expensive to generate. Indeed, you can use Application that way, but you'll need to manage the cached objects' lifetimes yourself; otherwise, your Application collection will grow and grow, consuming an unlimited amount of memory.
It's much better to use the framework's Cache data structure (System.Web.Caching.Cache)—it has sophisticated expiration and memory management facilities already built in, and your controllers can easily access an instance of it via HttpContext.Cache. You will probably want to use Cache for the results of any expensive computations or data retrieval, such as calls to external web services.
HttpContext.Cache does data caching, which is quite different from output caching. Output caching records the HTML response sent by an action method, and replays it for subsequent requests to the same URL, reducing the number of times that your action method code actually runs. For more about output caching, see the section "The [OutputCache] Filter" in Chapter 10. Data caching, on the other hand, gives you the flexibility to cache and retrieve arbitrary objects and use them however you wish.
The simplest usage of Cache is as a name/value dictionary: assign a value to HttpContext.Cache[key], and then read it back from HttpContext.Cache[key]. The data is persisted and shared across all requests, being automatically removed when memory pressure reaches a certain level or after the data remains unused for a sufficiently long period.
You can put any .NET object into Cache—it doesn't even have to be serializable, because the framework holds it in memory as a live object. Items in the Cache won't be garbage-collected, because the Cache holds a reference to them. Of course, that also means that the entire object graph reachable from a cached object can't be garbage-collected either, so be careful not to cache more than you had in mind.
Rather than simply assigning a value to HttpContext.Cache[key], it's better to use the HttpContext.Cache.Add() method, which lets you configure the storage parameters listed in Table 17-3.
Table 17-3. Parameters You Can Specify When Calling HttpContext.Cache.Add()
Type | Meaning
---|---
CacheDependency | This lets you nominate one or more file names, or other cache item keys, upon which this item depends. When any of the files or cache items change, this item will be evicted from the cache.
DateTime (absolute expiration) | This is a fixed point in time when the item will expire from the cache. It's usually specified relative to the current time.
TimeSpan (sliding expiration) | If the cache item isn't accessed (i.e., retrieved from the cache collection) for a duration of at least this length, the item will expire from the cache. You can create TimeSpan values conveniently using static methods such as TimeSpan.FromMinutes().
CacheItemPriority | If the system is removing items from the cache as a result of memory pressure, it will remove items with a lower priority first.
CacheItemRemovedCallback | This lets you nominate a callback function to receive notification when the item expires. You'll see an example of this shortly.
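Putting all of the parameters together, a call to Cache.Add() might look like the following sketch. The key, the reportData variable, and the file path are invented for illustration; only the method signature itself comes from the framework.

```csharp
// Cache an object with a file dependency, a 30-minute sliding
// expiration, normal priority, and no removal callback
HttpContext.Cache.Add(
    "todaysReport",                         // key
    reportData,                             // value (any .NET object)
    new CacheDependency(Server.MapPath("~/App_Data/report.xml")),
    Cache.NoAbsoluteExpiration,             // no fixed expiry time...
    TimeSpan.FromMinutes(30),               // ...but expire after 30 idle minutes
    CacheItemPriority.Normal,
    null);                                  // no removal callback
```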
As I mentioned earlier, Cache is often used to cache the results of expensive method calls, such as certain database queries or web service calls. The drawback is of course that your cached data may become stale, which means that it might not reflect the most up-to-date results. It's up to you to make the appropriate trade-off when deciding what to cache and for how long.
For example, imagine that your web application occasionally makes HTTP requests to other web servers. It might do this to consume a REST web service, to retrieve RSS feeds, or simply to find out what logo Google is displaying today. Each such HTTP request to a third-party server might take several seconds to complete, during which time you'll be keeping your site visitor waiting for their response. Because this operation is so expensive—even if you run it as a background task using an asynchronous controller—it makes sense to cache its results.
You might choose to encapsulate this logic into a class called CachedWebRequestService, implemented as follows:

public class CachedWebRequestService
{
    private Cache cache; // The reasons for storing this will become apparent later

    private const string cacheKeyPrefix = "__cachedWebRequestService";

    public CachedWebRequestService(Cache cache)
    {
        this.cache = cache;
    }

    public string GetWebPage(string url)
    {
        string key = cacheKeyPrefix + url;  // Compute a cache key
        string html = (string)cache[key];   // Try retrieving the value
        if (html == null)                   // Check if it's not in the cache
        {
            // Reconstruct the value by performing an actual HTTP request
            html = new WebClient().DownloadString(url);

            // Cache it
            cache.Insert(key, html, null, DateTime.MaxValue,
                         TimeSpan.FromMinutes(15), CacheItemPriority.Normal, null);
        }
        return html; // Return the value retrieved or reconstructed
    }
}
You can invoke this service from an action method by supplying HttpContext.Cache as a constructor parameter:

public string Index()
{
    var cwrs = new CachedWebRequestService(HttpContext.Cache);
    string httpResponse = cwrs.GetWebPage("http://www.example.com");
    return string.Format("The example.com homepage is {0} characters long.",
                         httpResponse.Length);
}
There are two main points to note:
Whenever this code retrieves items from the Cache collection, it checks whether the value retrieved is null. This is important because items can be removed from Cache at any moment, even before your suggested expiry criteria are met. The typical pattern to follow is (as demonstrated in the preceding example):

1. Compute a cache key.
2. Try retrieving the value under that key.
3. If you get null, reconstruct the value and add it to the cache under that key.
4. Return the value you retrieved or reconstructed.

When you have multiple application components sharing the same Cache (usually, your application has only one Cache), make sure they don't generate clashing keys; otherwise, you'll have a lengthy debugging session on your hands. The easiest way to avoid clashes is to impose your own system of namespacing. In the previous example, all cache keys are prefixed by a special constant value that is certainly not going to coincide with any other application component.
What you've already seen is likely to be sufficient for most applications, but the framework offers a number of extra capabilities to do with dependencies:
You can set a cache item to expire when any one of a set of files (on disk) changes. This is useful if the cached object is simply an in-memory representation of that file on disk, so when the file on disk changes, you want to wipe out the cached copy from memory.
You can set up chains of cache entry dependencies. For example, when A expires, it causes B to expire too. This is useful if B has meaning only in relation to A.
This is a more advanced feature: you can set a cache item to expire when the results of a given SQL query change. For SQL Server 7 and SQL Server 2000 databases, this is achieved by a polling mechanism, but for SQL Server 2005 and later, it uses the database's built-in Service Broker to avoid the need for polling. If you want to use this feature, you have some research to do, because it's generally quite difficult to set up (a good place to start is Pro SQL Server 2008 Service Broker, by Klaus Aschenbrenner [Apress, 2008]).
Finally, you can specify a callback function to be invoked when a given cache entry expires—for example, to implement a custom cache item dependency system. Another reason to take action on expiration is if you want to recreate the expiring item on the fly. You might do this if it takes a while to recreate the item and you really don't want your next visitor to have to wait for it. Watch out, though; you're effectively setting up an infinite loop, so don't do this with a short expiration timeout.
Here's how to modify the preceding example to repopulate each cache entry as it expires:
public string GetWebPage(string url)
{
    string key = cacheKeyPrefix + url;  // Compute a cache key
    string html = (string)cache[key];   // Try retrieving the value
    if (html == null)                   // Check if it's not in the cache
    {
        // Reconstruct the value by performing an actual HTTP request
        html = new WebClient().DownloadString(url);

        // Cache it
        cache.Insert(key, html, null, DateTime.MaxValue,
                     TimeSpan.FromMinutes(15), CacheItemPriority.Normal, OnItemRemoved);
    }
    return html; // Return the value retrieved or reconstructed
}

void OnItemRemoved(string key, object value, CacheItemRemovedReason reason)
{
    if (reason == CacheItemRemovedReason.Expired)
    {
        // Repopulate the cache
        GetWebPage(key.Substring(cacheKeyPrefix.Length));
    }
}
Note that the callback function gets called outside the context of any HTTP request. That means you can't access any Request or Response objects (there aren't any—not even via System.Web.HttpContext.Current), nor can you produce any output visible to any visitor. The only reason the preceding code can still access Cache is because it keeps its own reference to it.
Watch out for memory leaks! When your callback function is a method on an object instance (not a static method), you're effectively setting up a reference from the global Cache object to the object holding the callback function. That means the garbage collector cannot remove that object, nor anything else in the object graph reachable from it. In the preceding example, CachedWebRequestService only holds a reference to the shared Cache object, so this is OK. However, if you held a reference to the original HttpContext object, you'd be keeping many objects alive for no good reason.
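If you'd rather avoid the question entirely, you can make the callback a static method and reach the cache through HttpRuntime.Cache, which is accessible even outside any HTTP request. Here's a minimal sketch of that approach (the class name and cacheKeyPrefix value are assumptions; the WebClient logic mirrors the earlier example):

```csharp
using System;
using System.Net;
using System.Web;
using System.Web.Caching;

public static class CachedWebRequests
{
    private const string cacheKeyPrefix = "webRequest."; // Assumed prefix

    public static string GetWebPage(string url)
    {
        string key = cacheKeyPrefix + url;
        // HttpRuntime.Cache is reachable even when HttpContext.Current is null
        string html = (string)HttpRuntime.Cache[key];
        if (html == null)
        {
            html = new WebClient().DownloadString(url);
            HttpRuntime.Cache.Insert(key, html, null, DateTime.MaxValue,
                TimeSpan.FromMinutes(15), CacheItemPriority.Normal, OnItemRemoved);
        }
        return html;
    }

    // A static callback holds no reference to per-request objects, so the
    // cache's reference to it cannot keep a request's object graph alive
    private static void OnItemRemoved(string key, object value,
                                      CacheItemRemovedReason reason)
    {
        if (reason == CacheItemRemovedReason.Expired)
            GetWebPage(key.Substring(cacheKeyPrefix.Length)); // Repopulate
    }
}
```

Because everything here is static, there is no instance for the cache to pin in memory in the first place.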
Almost every web site needs a system of navigation, usually displayed as a navigation area at the top or left-hand side of every page. It's such a common requirement that ASP.NET 2.0 introduced the idea of site maps, which at its core is a standard API for describing and working with navigation hierarchies. There are two halves to it:
Configuring your site's navigation hierarchy, either as one or more XML files, or by implementing a custom SiteMapProvider class. Once you've done this, the framework will keep track of where the visitor is in your navigation hierarchy.
Rendering a navigation UI, either by using the built-in navigation server controls, or by creating your own custom navigation controls that query the site maps API. The built-in controls will highlight a visitor's current location and even filter out links that they don't have authorization to visit.
Of course, you could add basic, static navigation links to your site's master page in just a few seconds by typing out literal HTML, but by using site maps you get easy configurability (your navigation structure will no doubt change several times during and after development), as well as the built-in facilities mentioned previously.
ASP.NET ships with three built-in navigation controls, listed in Table 17-4, that connect to your site maps configuration automatically. Unfortunately, only one works properly without the whole server-side form infrastructure used in ASP.NET Web Forms.
Table 17-4. Built-In Site Maps Server Controls
Control | Description | Usable in an MVC Application? |
---|---|---|
SiteMapPath | Displays breadcrumb navigation, showing the visitor's current node in the navigation hierarchy, plus its ancestors | Yes |
TreeView | Displays a fixed hierarchical menu, highlighting the visitor's current position | No (it has to be placed in a server-side form) |
Menu | Displays a JavaScript-powered hierarchical flyout menu highlighting the visitor's current position | No (it has to be placed in a server-side form) |
Considering that Menu and TreeView aren't usable, you'll probably want to implement your own custom MVC-compatible navigation HTML helpers that connect to the site maps API—you'll see an example shortly.
To get started using the default XmlSiteMapProvider, right-click the root of your project and choose Add ➤ New Item, then select the Site Map template, keeping the default file name of Web.sitemap.
If you want to put a site map somewhere else, or call it something different, you need to override XmlSiteMapProvider's default settings in your Web.config file. For example, add the following inside <system.web>:
<siteMap defaultProvider="MyXmlSiteMapProvider" enabled="true">
  <providers>
    <add name="MyXmlSiteMapProvider"
         type="System.Web.XmlSiteMapProvider"
         siteMapFile="~/Folder/MySiteMapFile.sitemap" />
  </providers>
</siteMap>
You can now fill in Web.sitemap, describing your site's navigation structure using the standard site map XML schema—for example:
<?xml version="1.0" encoding="utf-8" ?>
<siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0" >
  <siteMapNode url="~/" title="Home" description="">
    <siteMapNode url="~/Home/About" title="About" description="All about us"/>
    <siteMapNode url="~/Home/Another" title="Something else"/>
    <siteMapNode url="http://www.example.com/" title="Example.com"/>
  </siteMapNode>
</siteMap>
Next, put the built-in SiteMapPath control in your master page:
<asp:SiteMapPath runat="server"/>
and it will display the visitor's current location in your navigation hierarchy (Figure 17-10).
Breadcrumb navigation is very nice, but you're likely to need some kind of menu too. It's quite easy to build a custom HTML helper that obtains navigation information using the SiteMap class. For example, put the following class anywhere in your application:
public static class SiteMapHelpers
{
    public static void RenderNavMenu(this HtmlHelper html)
    {
        HtmlTextWriter writer = new HtmlTextWriter(html.ViewContext.Writer);
        RenderRecursive(writer, SiteMap.RootNode);
    }

    private static void RenderRecursive(HtmlTextWriter writer, SiteMapNode node)
    {
        if (SiteMap.CurrentNode == node)                // Highlight visitor's location
            writer.RenderBeginTag(HtmlTextWriterTag.B); // Render as bold text
        else
        {
            // Render as link
            writer.AddAttribute(HtmlTextWriterAttribute.Href, node.Url);
            writer.RenderBeginTag(HtmlTextWriterTag.A);
        }
        writer.Write(node.Title);
        writer.RenderEndTag();

        // Render children
        if (node.ChildNodes.Count > 0)
        {
            writer.RenderBeginTag(HtmlTextWriterTag.Ul);
            foreach (SiteMapNode child in node.ChildNodes)
            {
                writer.RenderBeginTag(HtmlTextWriterTag.Li);
                RenderRecursive(writer, child);
                writer.RenderEndTag();
            }
            writer.RenderEndTag();
        }
    }
}
RenderNavMenu() is an extension method, so you'll only be able to use it in a particular master page or view after importing its namespace. So, add the following at the top of your master page or view:
<%@ Import Namespace="insert namespace containing SiteMapHelpers" %>
Now you can invoke the custom HTML helper as follows:
<% Html.RenderNavMenu(); %>
Depending on your site map configuration and the visitor's current location, this will render something like the following:
<a href="/">Home</a> <ul> <li><b>About</b></li> <li><a href="/Home/Another">Something else</a></li> <li><a href="http://www.example.com/">Example.com</a></li> </ul>
Of course, you can add any formatting, CSS, or client-side scripting of your choosing.
ASP.NET's default site map provider, XmlSiteMapProvider, expects you to specify an explicit URL for each site map node—it predates the routing system.
But in your ASP.NET MVC application, wouldn't it be better not to specify explicit URLs, and instead generate the URLs dynamically according to your routing configuration? Perhaps you'd like to replace your Web.sitemap contents with the following:
<?xml version="1.0" encoding="utf-8" ?>
<siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0" >
  <siteMapNode title="Home" controller="Home" action="Index">
    <siteMapNode title="About" controller="Home" action="About"/>
    <siteMapNode title="Log in" controller="Account" action="LogOn"/>
  </siteMapNode>
</siteMap>
Notice that there are no URLs hard-coded into this configuration. This configuration won't work with the default XmlSiteMapProvider, but you can make it work by creating a custom site map provider. Add the following class anywhere in your project:
public class RoutingSiteMapProvider : StaticSiteMapProvider
{
    private SiteMapNode rootNode;

    public override void Initialize(string name, NameValueCollection attributes)
    {
        base.Initialize(name, attributes);

        // Load XML file, taking name from Web.config or use Web.sitemap as default
        var xmlDoc = new XmlDocument();
        var siteMapFile = attributes["siteMapFile"] ?? "~/Web.sitemap";
        xmlDoc.Load(HostingEnvironment.MapPath(siteMapFile));
        var rootSiteMapNode = xmlDoc.DocumentElement["siteMapNode"];

        // Build the navigation structure
        var httpContext = new HttpContextWrapper(HttpContext.Current);
        var requestContext = new RequestContext(httpContext, new RouteData());
        rootNode = AddNodeRecursive(rootSiteMapNode, null, requestContext);
    }

    private static string[] reservedNames = new[] { "title", "description", "roles" };

    private SiteMapNode AddNodeRecursive(XmlNode xmlNode, SiteMapNode parent,
                                         RequestContext context)
    {
        // Generate this node's URL by querying RouteTable.Routes
        var routeValues = (from XmlNode attrib in xmlNode.Attributes
                           where !reservedNames.Contains(attrib.Name.ToLower())
                           select new { attrib.Name, attrib.Value })
                          .ToDictionary(x => x.Name, x => (object)x.Value);
        var routeDict = new RouteValueDictionary(routeValues);
        var url = RouteTable.Routes.GetVirtualPath(context, routeDict).VirtualPath;

        // Register this node and its children
        var title = xmlNode.Attributes["title"].Value;
        var node = new SiteMapNode(this, Guid.NewGuid().ToString(), url, title);
        base.AddNode(node, parent);
        foreach (XmlNode childNode in xmlNode.ChildNodes)
            AddNodeRecursive(childNode, node, context);
        return node;
    }

    // These methods are called by ASP.NET to fetch your site map data
    protected override SiteMapNode GetRootNodeCore() { return rootNode; }
    public override SiteMapNode BuildSiteMap() { return rootNode; }
}
Enable your custom site map provider by adding the following inside Web.config's <system.web> node:
<siteMap defaultProvider="MyProvider">
  <providers>
    <clear/>
    <add name="MyProvider" type="Namespace.RoutingSiteMapProvider"/>
  </providers>
</siteMap>
This took a bit more work than just using ASP.NET's built-in site map provider, but I think it was worth it. You can now define site map entries in terms of arbitrary routing data without hard-coding any URLs. Whenever your routing configuration changes, so will your navigation UI. You're not limited to specifying only controller and action in your site map file—you can specify any custom routing parameters, and the appropriate URLs will be generated according to your routing configuration.
The site maps feature offers a facility called security trimming. The idea is that each visitor should only see links to the parts of your site that they're authorized to access. To enable this feature, alter your custom site map provider registration as follows:
<siteMap defaultProvider="MyProvider">
  <providers>
    <clear/>
    <add name="MyProvider" type="Namespace.RoutingSiteMapProvider"
         securityTrimmingEnabled="true"/>
  </providers>
</siteMap>
You can then control which nodes are accessible to each visitor by overriding the IsAccessibleToUser() method on your custom site map provider:
public class RoutingSiteMapProvider : StaticSiteMapProvider
{
    // Rest of class as before

    public override bool IsAccessibleToUser(HttpContext context, SiteMapNode node)
    {
        if (node == rootNode) return true; // Root node must always be accessible
        // Insert your custom logic here
        return true; // Placeholder so the sketch compiles; replace with your check
    }
}
The normal way to do this is to put an attribute called roles on each <siteMapNode> node, and then enhance RoutingSiteMapProvider to detect this attribute value and use context.User.IsInRole() to validate that the visitor is in at least one of the specified roles. You'll find this implemented in the downloadable code samples for this book.
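To make that concrete, here's a minimal sketch of the roles-based check (the comma-separated roles attribute format and the nodeRoles dictionary are assumptions of this sketch, not the book's exact sample; the downloadable code may differ in detail):

```csharp
// Additional members inside RoutingSiteMapProvider. AddNodeRecursive would
// populate this dictionary as it builds the tree, e.g.:
//     var rolesAttrib = xmlNode.Attributes["roles"];
//     if (rolesAttrib != null) nodeRoles[node] = rolesAttrib.Value.Split(',');
private readonly Dictionary<SiteMapNode, string[]> nodeRoles
    = new Dictionary<SiteMapNode, string[]>();

public override bool IsAccessibleToUser(HttpContext context, SiteMapNode node)
{
    if (node == rootNode) return true;   // Root node must always be accessible

    string[] roles;
    if (!nodeRoles.TryGetValue(node, out roles))
        return true;                     // No roles attribute: visible to everyone

    // Visible only if the visitor is in at least one of the listed roles
    return roles.Any(role => context.User.IsInRole(role.Trim()));
}
```

With securityTrimmingEnabled="true", the built-in controls and your custom helpers will simply not see nodes for which this method returns false.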
If you're feeling ambitious, you might think you could avoid having to configure roles, and instead run the authorization filters on the target action to determine at runtime whether the visitor will be allowed to visit each site map node. This might technically be possible, but it would be very difficult to account for all the ways you could customize how controllers are selected, how action methods are selected, how filters are located, and how authorization filters determine who can access a given action. You would also need to cache this information appropriately, because it would be too expensive to keep recalculating it on each request.
Don't forget that security trimming only hides navigation menu links as a convenience—it doesn't actually prevent a visitor from requesting those URLs. Your site isn't really secure unless you actually enforce access restrictions by applying authorization filters.
Developing multilingual applications is always difficult, but the .NET Framework offers a number of services designed to ease the burden:
The System.Globalization namespace provides various services related to globalization, such as the CultureInfo class, which can format dates and numbers for different languages and cultures.
Every .NET thread keeps track of both its CurrentCulture (a CultureInfo object that determines various formatting and sorting settings) and its CurrentUICulture (a CultureInfo object that indicates which language should be used for UI text).
Various string-formatting methods respect the thread's CurrentCulture when rendering dates, numbers, and currencies.
Visual Studio has a built-in resource editor that makes it straightforward to manage translations of strings into different languages. During development, you can access these resource strings with IntelliSense because Visual Studio generates a class with a separate property for each resource string. At runtime, those properties call System.Resources.ResourceManager to return the translation corresponding to the current thread's CurrentUICulture.
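The effect of CurrentCulture on string formatting is easy to see in a plain console snippet (exact output varies slightly by framework version, so the comments show representative values):

```csharp
using System;
using System.Globalization;
using System.Threading;

class CultureDemo
{
    static void Main()
    {
        var value = 1234.5m;
        var date = new DateTime(2010, 5, 30);

        // US settings: dollar sign, comma group separator, month/day/year
        Thread.CurrentThread.CurrentCulture = new CultureInfo("en-US");
        Console.WriteLine("{0:c}  {1:d}", value, date); // e.g. $1,234.50  5/30/2010

        // French settings: euro sign, comma decimal separator, day/month/year
        Thread.CurrentThread.CurrentCulture = new CultureInfo("fr-FR");
        Console.WriteLine("{0:c}  {1:d}", value, date); // e.g. 1 234,50 €  30/05/2010
    }
}
```

The format strings are identical in both calls; only the thread's CurrentCulture changes.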
ASP.NET Web Forms has additional internationalization features, both of which you can technically still use in an MVC application:
If you mark an ASPX <%@ Page %> declaration with Culture="auto" UICulture="auto", the platform will inspect incoming requests for an Accept-Language header, and then assign the appropriate CurrentCulture and CurrentUICulture values (falling back on your application's default culture if the browser doesn't specify one).
You can bind server controls to your resource strings using the syntax <asp:Label runat="server" Text="<%$ resources:YourDateOfBirth %>"/>.
In an ASP.NET MVC application, you won't usually want to use either of those last two features. MVC views are easier to build with HTML helper methods than with Web Forms-style server controls, so the <%$ ... %> syntax is rarely applicable. Also, <%@ Page %> declarations don't take effect until a view is being rendered, which is too late if you want to take account of the visitor's requested culture during an action method. You'll learn about better alternatives in a moment.
It's very easy to get started with localizing text in your MVC application. Right-click your project in Solution Explorer and choose Add ➤ New Item, then select the Resources File template, naming it Resources.resx. The values given here (in Resources.resx) will be the application's defaults. You will of course want to support another language, so create a similar resource file with the same name, except with the designation of a culture inserted into the middle (e.g., Resources.en-GB.resx or Resources.fr-FR.resx). Figure 17-12 shows my Resources.en-GB.resx file.
Now, when you first saved Resources.resx, a Visual Studio custom tool sprang to life and created a C# class in the file Resources.Designer.cs. Among other things, the generated class contains a static property corresponding to each resource string—for example:
/// <summary>
///   Looks up a localized string similar to the President
/// </summary>
internal static string TheRuler
{
    get { return ResourceManager.GetString("TheRuler", resourceCulture); }
}
This is almost exactly what you want. The only problem is that the autogenerated class and its properties are all marked as internal, which makes them inaccessible from your ASPX views (which compile as one or more separate assemblies). To resolve this, go back to Resources.resx and set its access modifier to public, as shown in Figure 17-13.
Now you can reference your resource strings in a strongly typed, IntelliSense-assisted way in your MVC views, as shown in Figure 17-14.
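Figure 17-14 isn't reproduced here, but the usage it illustrates amounts to a view line like the following (the surrounding text is an invented example; TheRuler is the resource string from the generated class shown earlier):

```aspx
<p>Please bow before <%: Resources.TheRuler %>.</p>
```

IntelliSense will list every resource string as a property of the Resources class, and a typo becomes a compile-time error rather than a missing-resource bug at runtime.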
At runtime, ResourceManager will retrieve whatever value corresponds to the thread's CurrentUICulture. But how is this culture determined? By default, it's taken from your server's Windows settings, but a common requirement is to vary the culture for each visitor, inspecting the incoming Accept-Language header to determine their preferences.
One way to achieve this, which works perfectly well if you are only interested in the visitor's preferred culture while rendering ASPX views, is to add UICulture="auto" to your view's <%@ Page %> directive. That's not so useful if you might ever want to account for the visitor's culture during action methods or when rendering views using other view engines, so it's possibly better to add the following to your Global.asax.cs file:
protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Uses Web Forms code to apply "auto" culture to current thread and deal with
    // invalid culture requests automatically
    using (var fakePage = new Page())
    {
        var ignored = fakePage.Server; // Work around a Web Forms quirk
        fakePage.Culture = "auto";     // Apply local formatting to this thread
        fakePage.UICulture = "auto";   // Apply local language to this thread
    }
}
If you prefer, you can inspect the incoming Accept-Language header values manually using Request.UserLanguages, but beware that clients might request unexpected or invalid culture settings. The previous example shows how, instead of parsing the header and detecting invalid culture requests manually, you can leverage the existing logic on Web Forms' Page class.
So now, depending on which language the visitor has configured in their browser, they'll see one of the two outputs shown in Figure 17-15. The right-hand output corresponds to the browser language setting en-GB, and the left-hand output corresponds to anything else. The date and currency were formatted using Date.ToShortDateString() and string.Format("{0:c}", 1), respectively.
For all but the tiniest applications, you'll benefit from keeping your resources in a separate assembly. That makes it easier to manage in the long run, and means you can reference it from your other projects if needed.
To do this, create a new class library project, right-click it, and choose Add ➤ New Item, adding a public resource file just as before. Then reference the class library project from your MVC application.
There's one other trick worth considering. When you're editing MVC views all day long, you'll get tired of writing out MyResourcesProject.Resources.Something, so add the following global namespace registration to your Web.config file, and then you can just write Resources.Something:
<system.web>
<pages>
<namespaces>
<add namespace="MyResourcesProject"/>
</namespaces>
</pages>
</system.web>
Of course, in most real localization scenarios, you'll want to localize entire phrases into totally different languages, not just individual words into different dialects. Within those phrases, you'll often need to inject other strings that come from your database or were entered by the user.
The usual solution is to combine the framework's localization features with string.Format()
, using numbered placeholders, and the resource editor's Comment feature so your translation staff knows what each placeholder represents. For example, your default resource file might contain the placeholders shown in Figure 17-16.
Based on this, your translation staff can produce the Spanish resource file shown in Figure 17-17.
Then you can render a localized string from a view, as follows:
<%: string.Format(Resources.UserUpdated, ViewData["UserName"], DateTime.Now) %>
This renders the following by default:
The user "Bob" was updated at 1:46 PM
But for Spanish-speaking visitors, it renders this:
(13:46) El usuario "Bob" ha sido actualizado
Note how easy it is to vary sentence structures and even use different formatting styles. Complete phrases can be translated far more cleanly than individual sentence fragments such as "was updated at."
If internationalization is an important feature in your application, there are other topics you might want to consider, such as designing for right-to-left languages and handling non-Gregorian calendars. For more details, see .NET Internationalization, by Guy Smith-Ferrier (Addison-Wesley, 2006).
As you learned in Chapter 12, ASP.NET MVC has extensive support for client-side and server-side validation. You can express rules using Data Annotations attributes or implement your own custom validation provider. This brings up the question of how to globalize your validation rules (e.g., so that different cultures' date formats are respected) and how to localize validation error messages into different languages.
For server-side validation and model binding, ASP.NET MVC doesn't have or need any special support for globalization. When the .NET Framework parses numbers and dates, it automatically respects your thread's CurrentCulture value. For example, in en-GB mode, the value 30/05/2010 can successfully be parsed as a date, whereas the same value would trigger a validation error in en-US mode.
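You can verify that parsing difference directly, outside any web context:

```csharp
using System;
using System.Globalization;

class DateParsingDemo
{
    static void Main()
    {
        DateTime parsed;
        var enGB = new CultureInfo("en-GB");
        var enUS = new CultureInfo("en-US");

        // 30/05/2010 reads as day/month/year in en-GB, a valid date
        Console.WriteLine(DateTime.TryParse("30/05/2010", enGB,
            DateTimeStyles.None, out parsed)); // True

        // ...but as month/day/year in en-US, where month 30 is impossible
        Console.WriteLine(DateTime.TryParse("30/05/2010", enUS,
            DateTimeStyles.None, out parsed)); // False
    }
}
```

Model binding runs the same framework parsing logic against the request thread's CurrentCulture, which is why the Application_BeginRequest technique shown earlier matters for validation too.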
It's a little different for client-side validation, because JavaScript doesn't know about your web server's culture settings. By default, MicrosoftMvcValidation.js contains five client-side validation rule types:
required, which is independent of culture
stringLength, which is independent of culture
regularExpression, which is independent of culture
number, which by default assumes en-US number-parsing rules
range, which by default assumes en-US number-parsing rules
As you can see, the only client-side validation behavior affected by culture is number parsing. If your server-side culture uses different number-parsing rules than en-US culture, you'll need to take steps to make your client-side validation consistent with it. Otherwise, you could be in the odd situation where client-side validation interprets 1,234 as "one thousand, two hundred thirty-four," whereas server-side validation interprets it as "one point two-three-four."
To change the client-side number-parsing behavior, you can set properties on a JavaScript object called Sys.CultureInfo.CurrentCulture.numberFormat. This object will only exist after your script reference to MicrosoftAjax.js has loaded. For example, you could change its parsing behavior to match Spanish (es-ES) culture as follows:
<script type="text/javascript">
    // Note: this must go *after* your script reference to MicrosoftAjax.js
    var numberFormat = Sys.CultureInfo.CurrentCulture.numberFormat;
    numberFormat.NegativeSign = "-";
    numberFormat.PositiveSign = "+";
    numberFormat.NumberDecimalSeparator = ",";
    numberFormat.NumberGroupSeparator = ".";
    numberFormat.NumberNegativePattern = 1;
</script>
The five properties I've shown here (NegativeSign, PositiveSign, NumberDecimalSeparator, NumberGroupSeparator, and NumberNegativePattern) are the only ones that matter. Their meanings are all obvious, with the exception of NumberNegativePattern, which controls how positive and negative values are expressed, as shown in Table 17-5.
Table 17-5. Options for Configuring NumberNegativePattern
NumberNegativePattern Value | Example Positive Number | Example Negative Number |
---|---|---|
0 | 123 | (123) |
1 | +123 | −123 |
2 | + 123 | − 123 |
3 | 123+ | 123− |
4 | 123 + | 123 − |
Rather than manually altering values on Sys.CultureInfo.CurrentCulture.numberFormat, an alternative way to configure client-side validation globalization rules is to use the helper method Ajax.GlobalizationScript(). This simply emits a <script> tag to reference an external JavaScript file that should provide the globalization rules for your chosen culture.
Before you can use this, you have to configure the location of these external JavaScript files. The easiest option is to reference the files on Microsoft's CDN.[118] To do this, configure their location in your Global.asax.cs file as follows:
protected void Application_Start()
{
    AjaxHelper.GlobalizationScriptPath =
        "http://ajax.microsoft.com/ajax/4.0/1/globalization/";
    // Leave the rest of this method unchanged
}
Next, call Ajax.GlobalizationScript() before your reference to MicrosoftAjax.js—for example:
<%: Ajax.GlobalizationScript() %>
<script src="<%: Url.Content("~/Scripts/MicrosoftAjax.js") %>"
        type="text/javascript"></script>
If you want, you can explicitly pass a cultureInfo parameter to Ajax.GlobalizationScript(); otherwise, it will use your thread's current culture by default. The preceding view code will produce output similar to the following:
<script type="text/javascript"
src="http://ajax.microsoft.com/ajax/4.0/1/globalization/es-ES.js"></script>
<script src="/Scripts/MicrosoftAjax.js" type="text/javascript"></script>
The first of those two JavaScript files will cause Sys.CultureInfo.CurrentCulture to follow es-ES culture number-parsing rules.
The next consideration is how to display messages such as "This field is required" in different languages. If you've created a custom validation provider, it's up to you to implement your own mechanism for supplying localized messages. If you're using Data Annotations attributes, you can use their ErrorMessageResourceType and ErrorMessageResourceName properties to load messages from a resource file matching the thread's UI culture.
For example, create a resource file called ValidationMessages.resx
anywhere in your project. Add resource strings such as those shown in Figure 17-18.
Next, refer to these resource strings from your models' Data Annotations attributes as follows:
[Required(ErrorMessageResourceType = typeof(ValidationMessages),
          ErrorMessageResourceName = "Required")]
[RegularExpression(@".+@.+\..+",
                   ErrorMessageResourceType = typeof(ValidationMessages),
                   ErrorMessageResourceName = "EmailAddress")]
public string ContactEmail { get; set; }
Now the framework will use your resource strings to supply messages for both server-side and client-side validation.
To support multiple languages, simply create additional resource files for each culture you wish to support. For example, to support Spanish, create a resource file called ValidationMessages.es-ES.resx, containing the same string names (in this example, that's Required and EmailAddress) along with the Spanish translations. The framework will automatically use these translations whenever the thread's UI culture equals es-ES.
As mentioned in Chapter 12, for all model properties of numeric types (int, byte, decimal, ulong, etc.), the MVC Framework automatically emits a client-side validation rule to ensure that only numeric values may be entered. This is implemented by a built-in model validator provider called ClientDataTypeModelValidatorProvider. Unfortunately, this model validator provider doesn't have any concept of localization, so it will always generate the message "The field fieldName must be a number," with no way to customize this.
If this causes a problem for you, one possible solution is to remove ClientDataTypeModelValidatorProvider and replace it with your own implementation that obtains a localized message from your own resource files. To do this, create a resource file called ValidationMessages.resx if you don't already have one, and then add to it a resource string called MustBeNumber, containing text similar to "The field {0} must be a number."
Next, add the following code to your ASP.NET MVC project:
public class ClientNumberValidatorProvider : ClientDataTypeModelValidatorProvider
{
    public override IEnumerable<ModelValidator> GetValidators(ModelMetadata metadata,
                                                              ControllerContext context)
    {
        bool isNumericField = base.GetValidators(metadata, context).Any();
        if (isNumericField)
            yield return new ClientSideNumberValidator(metadata, context);
    }
}

public class ClientSideNumberValidator : ModelValidator
{
    public ClientSideNumberValidator(ModelMetadata metadata,
        ControllerContext controllerContext) : base(metadata, controllerContext) { }

    public override IEnumerable<ModelValidationResult> Validate(object container)
    {
        yield break; // Do nothing for server-side validation
    }

    public override IEnumerable<ModelClientValidationRule> GetClientValidationRules()
    {
        yield return new ModelClientValidationRule
        {
            ValidationType = "number",
            ErrorMessage = string.Format(CultureInfo.CurrentCulture,
                ValidationMessages.MustBeNumber, Metadata.GetDisplayName())
        };
    }
}
This code inherits the logic from ClientDataTypeModelValidatorProvider to determine whether a given property is numeric. For properties that are numeric, it simply emits a ModelClientValidationRule containing an instruction to validate the property as a number. As you can see from ClientSideNumberValidator's GetClientValidationRules() method, it uses the MustBeNumber resource string from your ValidationMessages.resx resource file (or whichever resource file is active, considering the thread's UI culture).
Finally, configure ASP.NET MVC to use this instead of its default ClientDataTypeModelValidatorProvider by updating Global.asax.cs as follows:
protected void Application_Start()
{
    // Leave the rest of this method unchanged
    var existingProvider = ModelValidatorProviders.Providers
        .Single(x => x is ClientDataTypeModelValidatorProvider);
    ModelValidatorProviders.Providers.Remove(existingProvider);
    ModelValidatorProviders.Providers.Add(new ClientNumberValidatorProvider());
}
In the remainder of this chapter, you'll learn a few techniques to improve, monitor, and measure the performance of an ASP.NET MVC application. All of them are applications of core ASP.NET platform features.
By default, the MVC Framework sends response data to browsers in a plain, uncompressed format. For example, textual data (e.g., HTML) is typically sent as a UTF-8 byte stream: it's more efficient than UTF-16, but nowhere near as tightly packed as it could be. Yet almost all modern browsers are happy to receive data in a compressed format, and they advertise this capability by sending an Accept-Encoding header with each request. For example, both Firefox 3 and Internet Explorer 7 send the following HTTP header:
Accept-Encoding: gzip, deflate
This means they're happy to accept either of the two main HTTP compression algorithms, gzip and deflate. In response, you use the Content-Encoding header to describe which, if any, of those algorithms you've chosen to use, and then compress the HTTP payload (which itself may still be UTF-8 or anything else) with that algorithm.
The .NET Framework's System.IO.Compression namespace contains ready-made implementations of both gzip and deflate compression algorithms, so it's very easy to implement the whole thing as a small action filter:
using System.IO;
using System.IO.Compression;

public class EnableCompressionAttribute : ActionFilterAttribute
{
    const CompressionMode compress = CompressionMode.Compress;

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        HttpRequestBase request = filterContext.HttpContext.Request;
        HttpResponseBase response = filterContext.HttpContext.Response;
        string acceptEncoding = request.Headers["Accept-Encoding"];
        if (acceptEncoding == null)
            return;
        if (acceptEncoding.ToLower().Contains("gzip"))
        {
            response.Filter = new GZipStream(response.Filter, compress);
            response.AppendHeader("Content-Encoding", "gzip");
        }
        else if (acceptEncoding.ToLower().Contains("deflate"))
        {
            response.Filter = new DeflateStream(response.Filter, compress);
            response.AppendHeader("Content-Encoding", "deflate");
        }
    }
}
In this example, the filter chooses gzip if the browser supports it, and otherwise falls back on deflate. Now, once you've decorated one or more action methods or controllers with the [EnableCompression] attribute, you'll see a considerable reduction in bandwidth usage. For example, this action method:
[EnableCompression]
public void Index()
{
    // Output a lot of data
    for (int i = 0; i < 10000; i++)
        Response.Write("Hello " + i + "<br/>");
}
would naturally result in a 149 KB payload,[119] but that's reduced to 34 KB because of [EnableCompression]
—a savings of over 75 percent. You might expect that real-world data wouldn't compress so well, but in fact, a study of 25 major web sites found that HTTP compression yielded average bandwidth savings of 75 percent.[120]
Compression saves on bandwidth, so pages load faster and users are happier. Plus, depending on your hosting scenario, bandwidth saved might equal money saved. But bear in mind that compression costs CPU time. What's more valuable to you, reduced CPU load or reduced bandwidth use? It's up to you to make a decision for your application—you might choose to enable compression only for certain action methods. If you combine it with output caching, you can have both low bandwidth and low CPU usage; the cost switches to memory.
Don't forget that HTTP compression is only really useful for textual data. Binary data, such as graphics, is usually already compressed. You will not benefit by wrapping gzip compression around existing JPEG compression; you will just burn CPU cycles for nothing.
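If you want the filter to guard against that automatically, one possible tweak (a sketch of my own, not part of the original filter; the list of "textual" content types is an assumption you should adjust for your application) is to bail out early for non-text responses:

```csharp
// Additional check inside EnableCompressionAttribute.OnActionExecuting,
// placed before the Accept-Encoding handling shown earlier
string contentType = response.ContentType ?? "";
bool isTextual = contentType.StartsWith("text/")
              || contentType.Contains("json")
              || contentType.Contains("xml")
              || contentType.Contains("javascript");
if (!isTextual)
    return; // Don't burn CPU recompressing images and other binary payloads
```

Note that ContentType may not be final at action-executing time if the action changes it later, so for file-serving actions you may prefer to apply the attribute selectively instead.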
IIS 6 and later can be configured to compress HTTP responses, either for static content (i.e., files served directly from disk) or for dynamic content (e.g., the output from your ASP.NET MVC application). Unfortunately, it's quite difficult to configure (on IIS 6, you have to edit the metabase directly, which might not be an option in some deployment scenarios), and of course it doesn't give you the fidelity of enabling or disabling it for individual action methods.
Even though it usually makes more business sense to optimize your application for maintainability and extensibility rather than for sheer performance (servers are cheaper than developers), there's still great value in keeping an eye on some carefully chosen performance metrics as you code.
That action method of yours used to run in 0.002 seconds, but after your recent amendment, it now takes 0.2 seconds. Did you realize? This factor-of-100 difference could be critical when the application is under production loads. And you assumed a certain action method ran 1 or 2 database queries, but sometimes it runs 50—not obvious during development; critical when live.
Dedicated load testing is useful, but by that stage you've written the code and perhaps built more code on top of it. If you could spot major performance issues earlier, you'd save a lot of effort. Fortunately, each part of your application stack offers tools to help you keep track of what's happening behind the scenes:
ASP.NET has a built-in tracing feature that appends (a vast number of) request processing statistics to the end of each page generated, as shown in Figure 17-19. Unfortunately, it's mainly intended for classic ASP.NET Web Forms applications—most of the timing information is presented in terms of server controls and page life cycle events.
You can enable tracing by adding the following to your Web.config file, inside <system.web>:
<trace enabled="true" pageOutput="true"/>
Also, ASP.NET's health monitoring feature lets you log or otherwise take action each time the application starts or shuts down, each time a request is processed, and on each heartbeat event (a heartbeat confirms that the application is responsive). To find out more about health monitoring, read its MSDN page at http://msdn.microsoft.com/en-us/library/ms998306.aspx.
IIS, like most web servers, will create a log of HTTP requests, showing the time taken to service each.
SQL Server's Profiler, when running, logs all database queries and shows execution statistics.
Windows itself has built-in performance monitoring: perfmon will log and graph your CPU utilization, memory consumption, disk activity, network throughput, and far more. It even has special facilities for monitoring ASP.NET applications, including the number of application restarts, .NET exceptions, requests processed, and so on.
There are so many possibilities here; you must be able to get the information you need . . . somehow. However, it isn't always obvious how to get only the most pertinent information, and how to keep those key metrics effortlessly visible as an ongoing development consideration (and how to encourage your coworkers to do the same).
For a quick-and-easy way to keep track of performance characteristics, you can create a custom HTTP module that appends performance statistics to the bottom of each page generated. An HTTP module is just a .NET class implementing IHttpModule—you can put it anywhere in your solution. Here's an example that uses .NET's built-in high-resolution timer class, System.Diagnostics.Stopwatch:
public class PerformanceMonitorModule : IHttpModule
{
    public void Dispose() { /* Nothing to do */ }

    public void Init(HttpApplication context)
    {
        context.PreRequestHandlerExecute += delegate(object sender, EventArgs e)
        {
            HttpContext requestContext = ((HttpApplication)sender).Context;
            Stopwatch timer = new Stopwatch();
            requestContext.Items["Timer"] = timer;
            timer.Start();
        };
        context.PostRequestHandlerExecute += delegate(object sender, EventArgs e)
        {
            HttpContext requestContext = ((HttpApplication)sender).Context;
            Stopwatch timer = (Stopwatch)requestContext.Items["Timer"];
            timer.Stop();
            // Don't interfere with non-HTML responses
            if (requestContext.Response.ContentType == "text/html")
            {
                double seconds = (double)timer.ElapsedTicks / Stopwatch.Frequency;
                string result = string.Format("{0:F4} sec ({1:F0} req/sec)",
                    seconds, 1 / seconds);
                requestContext.Response.Write("<hr/>Time taken: " + result);
            }
        };
    }
}
IHttpModule classes have to be registered in your application's Web.config file, via a node like this:
<add name="PerfModule" type="Namespace.PerformanceMonitorModule,AssemblyName"/>
For IIS 5/6, and for the Visual Studio built-in web server, add it to the system.web/httpModules section. For IIS 7.x, add it to the system.webServer/modules section (or use IIS 7.x's Modules GUI, which edits Web.config on your behalf).
Once you have PerformanceMonitorModule registered, you'll start seeing performance statistics, as shown in Figure 17-20.

That statistic alone is a key performance indicator. By building it into your application, you automatically share the insight with all other developers on your team. When you deploy to your production servers, just remove (or comment out) the module from your Web.config file.
Besides page generation time, the most important performance statistics usually relate to database access. That's because you can probably issue 100 queries to your own personal SQL Server instance in mere milliseconds, but if your production server tried to do the same for 100 concurrent clients, you'd be in trouble.
Also, if you're using an ORM tool such as LINQ to SQL, NHibernate, or Entity Framework, don't lose touch with reality. Even though you don't write much SQL yourself, there's still a whole lot of SQL going on under the surface. But how many queries happen, and are they well optimized? Do you have the famous SELECT N+1 problem?[121] How will you know?
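To make the N+1 arithmetic concrete, here's a toy, ORM-free simulation (the FakeDb class and its methods are invented for illustration; a counter stands in for database round-trips) comparing per-row lazy loading against a single joined query:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical stand-in for a database: counts every "query" issued
class FakeDb
{
    public int QueriesIssued;

    public List<int> LoadProductIds()
    { QueriesIssued++; return Enumerable.Range(1, 50).ToList(); }

    public string LoadCategoryFor(int productId)
    { QueriesIssued++; return "Cat" + (productId % 5); }

    // One joined query fetching products and categories together
    public Dictionary<int, string> LoadProductsWithCategories()
    { QueriesIssued++; return Enumerable.Range(1, 50).ToDictionary(i => i, i => "Cat" + (i % 5)); }
}

class SelectNPlusOneDemo
{
    static void Main()
    {
        var lazyDb = new FakeDb();
        foreach (int id in lazyDb.LoadProductIds())
            lazyDb.LoadCategoryFor(id);          // one extra query per row
        Console.WriteLine("Lazy: " + lazyDb.QueriesIssued + " queries");  // prints "Lazy: 51 queries"

        var eagerDb = new FakeDb();
        eagerDb.LoadProductsWithCategories();    // everything in one JOIN
        Console.WriteLine("Eager: " + eagerDb.QueriesIssued + " query"); // prints "Eager: 1 query"
    }
}
```

Fifty rows quietly becomes fifty-one queries; an eager loading strategy (in LINQ to SQL, via DataLoadOptions) collapses that back to one.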
One option is to use SQL Server's Profiler tool: it displays every query in real time. However, that means you have to run SQL Profiler, and you have to keep remembering to look at it. And even if you do have a special monitor dedicated to SQL Profiler, it's still hard to work out which queries relate to which HTTP request. Fortunately, LINQ to SQL does its own internal query logging, so you can write an HTTP module to show the queries that were invoked during each request. This is much more convenient.
Add the following class to your solution:
public class SqlPerformanceMonitorModule : IHttpModule
{
    static string[] QuerySeparator = new string[] {
        Environment.NewLine + Environment.NewLine
    };

    public void Init(HttpApplication context)
    {
        context.PreRequestHandlerExecute += delegate(object sender, EventArgs e)
        {
            // Set up a new empty log
            HttpContext httpContext = ((HttpApplication)sender).Context;
            httpContext.Items["linqToSqlLog"] = new StringWriter();
        };
        context.PostRequestHandlerExecute += delegate(object sender, EventArgs e)
        {
            HttpContext httpContext = ((HttpApplication)sender).Context;
            HttpResponse response = httpContext.Response;
            // Don't interfere with non-HTML responses
            if (response.ContentType == "text/html")
            {
                var log = (StringWriter)httpContext.Items["linqToSqlLog"];
                var queries = log.ToString().Split(QuerySeparator,
                    StringSplitOptions.RemoveEmptyEntries);
                RenderQueriesToResponse(response, queries);
            }
        };
    }

    void RenderQueriesToResponse(HttpResponse response, string[] queries)
    {
        response.Write("<div class='PerformanceMonitor'>");
        response.Write(string.Format("<b>Executed {0} SQL {1}</b>",
            queries.Length, queries.Length == 1 ? "query" : "queries"));
        response.Write("<ol>");
        foreach (var entry in queries)
            response.Write(string.Format("<li>{0}</li>",
                Regex.Replace(entry, "(FROM|WHERE|--)", "<br/>$1")));
        response.Write("</ol>");
        response.Write("</div>");
    }

    public void Dispose() { /* Not needed */ }
}
As usual, you need to register the HTTP module in your Web.config file, either under system.web/httpModules for IIS 5/6 and for the Visual Studio built-in web server, or under system.webServer/modules for IIS 7.x. Here's the syntax:
<add name="SqlPerf" type="Namespace.SqlPerformanceMonitorModule,AssemblyName"/>
This HTTP module starts each request by creating a new StringWriter object and storing it in the current HTTP context's Items collection. Later, at the end of the request, it retrieves that StringWriter, parses out SQL query data that has been inserted into it in the meantime, makes a vague effort to format it nicely by inserting line breaks and HTML tags, and injects it into the response stream.
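The parsing step is simple enough to demonstrate in isolation: LINQ to SQL separates entries in its log with a blank line, which is exactly what splitting on two consecutive newlines exploits. A standalone sketch (the log contents here are fabricated, just shaped the way the module expects):

```csharp
using System;
using System.IO;

class LogSplitDemo
{
    static void Main()
    {
        string[] separator = { Environment.NewLine + Environment.NewLine };

        // Fabricated log in the blank-line-delimited shape the module parses
        var log = new StringWriter();
        log.WriteLine("SELECT * FROM Products");
        log.WriteLine();
        log.WriteLine("SELECT * FROM Categories WHERE Id = @p0");
        log.WriteLine();

        string[] queries = log.ToString()
            .Split(separator, StringSplitOptions.RemoveEmptyEntries);
        Console.WriteLine(queries.Length); // prints 2
    }
}
```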
That's great, but LINQ to SQL doesn't know anything about it, so it's not going to tell it about any queries. You can rectify this by hooking into your LINQ to SQL DataContext class's OnCreated() partial method. The way to do this depends on how you originally created your DataContext class:
If you originally created your DataContext class as a .dbml file (by asking Visual Studio to create a new LINQ to SQL Classes file), open that file in the visual designer and choose View Code. Visual Studio will create a partial class file in which you can implement OnCreated() as follows:
public partial class ExampleDataContext
{
    // Leave rest of class unchanged
    partial void OnCreated()
    {
        var context = HttpContext.Current;
        if (context != null)
            this.Log = (StringWriter)context.Items["linqToSqlLog"];
    }
}
If you originally created your DataContext class manually, as you did in the SportsStore example, simply assign the log object to its Log property:
var dc = new DataContext(connectionString);
dc.Log = (StringWriter)HttpContext.Current.Items["linqToSqlLog"];
var productsTable = dc.GetTable<Product>();
This means that each time a data context is created, it will find the StringWriter that was created by SqlPerformanceMonitorModule, and use it as a log for any queries issued. If you have more than one DataContext class, hook them all up the same way.
The result of this is shown in Figure 17-21.
If you're new to LINQ to SQL and you don't know how efficiently you're using it, then having this much clarity about what's happening is essential. And if you have developers on your team who don't trust ORM tools because of performance fears, show this to them and see if it helps to change their mind.
The idea with IHttpModules is that you can use any combination of them at once. So, you could use SqlPerformanceMonitorModule concurrently with PerformanceMonitorModule to monitor both SQL queries and page generation times. Just don't forget to remove them from your Web.config file when you deploy to your production server—unless you actually want to display that information to the public.
In this chapter, you saw the most commonly used ready-made application components provided by the core ASP.NET platform, and how to use them in an MVC application. If you're able to use any of these, rather than inventing your own equivalent, you may save yourself weeks of work.
In the final chapter, you'll consider techniques for taking existing applications—built either with ASP.NET Web Forms or ASP.NET MVC 1—and migrating them to ASP.NET MVC 2. Plus, I'll show how you can combine MVC and Web Forms in the same application to take advantage of the strengths of both platforms.
[111] In Firefox 3.5, go to Tools
[112] To make Forms Authentication work on a web farm, you either need client/server affinity, or you need to make sure all your servers have the same explicitly defined <machineKey> value. You can generate a random one at http://aspnetresources.com/tools/keycreator.aspx.
[113] Rainbow tables are huge databases containing precomputed hash values for trillions of possible passwords. An attacker can quickly check whether your hash value is in the table, and if so, they have the corresponding password. There are various rainbow tables that you can freely query online. Or there's my favorite attack on unsalted MD5 or SHA1 hashes: just put the hash value into Google. If the password was a dictionary word, you'll probably figure it out pretty quickly.
By adding an arbitrary extra value (salt) into the hash, even without keeping the salt value secret, the hash becomes far harder to reverse. An attacker would have to compute a brand-new rainbow table using that particular salt value in all the hashes. Rainbow tables take a vast amount of time and computing horsepower to generate.
[114] For example, \Windows\Microsoft.NET\Framework\v4.0.30319. If you're targeting .NET 3.5, replace the version number with v2.0.50727. And if you're running in 64-bit mode, replace Framework with Framework64.
[115] If you're not using SqlMembershipProvider, technically you could still use SqlRoleProvider, but you probably wouldn't want to: it depends on the same database schema as SqlMembershipProvider.
[116] Every time you write a change to Web.config, it recycles the application process. Also, for it even to be possible to write changes to Web.config, your ASP.NET worker process obviously needs write access to that file. You may prefer not to give your worker processes that much power.
[117] If you want to follow ASP.NET folder conventions, create the special ASP.NET folder App_GlobalResources, and put your resource file in there (although you don't have to do this).
[118] Alternatively, you can host these JavaScript files locally, but first you will have to obtain them somehow—perhaps by downloading them manually from Microsoft's CDN.
[119] You can find out the download size of your page by opening it in Firefox 3. Right-click the page and choose View Page Info. It's on the General tab, captioned "Size." After enabling or disabling compression, reload your page in Firefox using Ctrl+F5 (not just F5) to see it take effect. However, don't pay attention to what Internet Explorer says (when you right-click a page and choose Properties)—it always displays the page size after decompression.
[120] King, Andrew. Speed Up Your Site: Web Site Optimization. New Riders Press, 2003 (www.websiteoptimization.com/speed/18/18-2t.html).
[121] SELECT N+1 refers to the scenario where an ORM tool loads a list of N objects (that's one query), and then for each object in the list, does a separate query to load some linked object (that's N more queries). Of course, issuing so many queries is highly undesirable. The solution is to configure an eager loading strategy so that all of those linked objects are joined into the original query, reducing the whole loading process to a single SQL query. LINQ to SQL supports this through a notion called DataLoadOptions.