Chapter 6. Designing Application Center

In This Chapter

Technology Capabilities

Application Center is designed to provide an integrated solution for Web server management. That means Application Center provides just about every function you might need for administering and maintaining Web servers, but doesn’t provide any functionality for maintaining the actual content of a Web site.

Note

Products like Content Management Server and Commerce Server provide functionality for building and maintaining Web site content.

Much of the functionality in Application Center was originally included in other Microsoft products, such as Site Server. For that reason, Application Center’s capabilities can be easily divided into functional areas that correspond with those inherited features: Network Load Balancing, Component Load Balancing, content deployment and synchronization, and health and performance monitoring. In the sections that follow, I’ll introduce you to each of these functional areas and describe how they work.

Note

For more information on the products that led up to Application Center, see “Application Center,” p. 700.

Network Load Balancing

Application Center includes the same Network Load Balancing (NLB) technology that is included with Windows 2000 Advanced Server and all editions of Windows .NET Server. However, because Windows 2000 Advanced Server is rarely used on Web server computers, Application Center allows NLB to be used on Windows 2000 Server.

Application Center enables you to create Web clusters, which are groups of independent servers all running Application Center. Each server in the cluster (referred to as a cluster member) must have two network adapters. NLB only affects one of these adapters, which is referred to as the load balanced adapter. The members of an NLB cluster aren’t connected to one another in any way other than their network connections, and the members can even run on completely different types of hardware.

Theory of Operation

NLB works by enabling you to specify a virtual IP address. This virtual IP address works just like a real IP address, except that it doesn’t represent an individual computer. Instead, the virtual IP address represents the entire cluster. Any network traffic sent to the virtual IP address will be handled by NLB. NLB also creates a virtual MAC address, and reprograms the load balanced adapter on each cluster member to use that MAC address rather than its own. The effect of this reprogramming is that all cluster members can see all of the traffic sent to the virtual IP address. NLB then decides which server will respond to each incoming request.

Note

NLB doesn’t actually reprogram the MAC address that is burned into a network adapter by its manufacturer; instead, NLB commands the network adapter’s driver to use the virtual MAC address instead of the one burned into the physical adapter. That means your network adapter drivers must support software MAC address reprogramming. Most network cards listed on the Windows Hardware Compatibility List (HCL) provide this capability.

Every second or so, NLB builds a routing list by querying each cluster member for information on how busy it is at the moment. The routing list is simply a list of all cluster members, with the least-busy member at the top, then the next-least-busy member, and so forth. The routing list also takes into account a server weight, which you can configure to represent computers which are physically capable of handling more traffic than others. For example, you might configure your oldest Web servers with a weight of 10, while configuring your brand-new Web server—which is twice as powerful as the older ones—with a weight of 20. Weights ensure that incoming traffic will be load balanced according to each server’s relative capabilities. The routing list also accounts for servers which are unresponsive, or which have been temporarily removed from NLB routing for maintenance purposes. These servers are left off of the routing list, ensuring that they won’t be expected to handle any incoming client traffic.
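The routing-list ordering described above can be sketched in a few lines. This is purely an illustrative model; the server names, the load metric, and the weight arithmetic are my own assumptions, not NLB's actual internal algorithm:

```python
# Hypothetical sketch of how an NLB-style routing list might be built.
# Server names, weights, and the "load" numbers are illustrative only.

def build_routing_list(members):
    """Order available members so the least-busy server (relative to its
    weight) appears first; skip unresponsive or suspended members."""
    available = [m for m in members if m["responsive"] and not m["suspended"]]
    # A higher weight means more capacity, so divide current load by
    # weight to compare servers of different sizes fairly.
    return sorted(available, key=lambda m: m["load"] / m["weight"])

members = [
    {"name": "OldWeb1", "weight": 10, "load": 30, "responsive": True,  "suspended": False},
    {"name": "OldWeb2", "weight": 10, "load": 50, "responsive": True,  "suspended": False},
    {"name": "NewWeb1", "weight": 20, "load": 70, "responsive": True,  "suspended": False},
    {"name": "OldWeb3", "weight": 10, "load": 10, "responsive": False, "suspended": False},
]

routing = build_routing_list(members)
print([m["name"] for m in routing])  # ['OldWeb1', 'NewWeb1', 'OldWeb2']
```

Note that the unresponsive server is simply left off the list, so it is never handed client traffic.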

Each cluster member maintains its own copy of the routing list. When traffic comes into the cluster, every member sees it, but only the server currently designated as the least busy actually responds. Because the routing list is updated constantly, the least busy server will almost always be the one to handle new incoming traffic.

While NLB provides very responsive load balancing capabilities, it can actually cause problems in some cases. The most common problem is caused by ASP session variables. A session variable is a software programming object that Web developers can use to store information about a Web user in the server’s memory. Session variables make it easier to write dynamic, interactive Web applications, but they store information entirely in the server’s memory. By default, NLB ignores session variables, which can cause an inconsistent user experience for Web users. To see why, I’ll walk you through how NLB works on a Web site that uses session variables:

  1. A new user accesses the Web site, and NLB determines that ServerA will handle the connection.

  2. ServerA executes an ASP page, which sets the contents of a session variable. Those contents are stored in ServerA’s local memory.

  3. The user clicks on a link to go to a new page. This time, NLB determines that the request will be handled by ServerB.

  4. ServerB has never handled this user before, and doesn’t have their information in its memory. That means the session variable on ServerB is empty. The result is that the user sees a page as if they were visiting the Web site for the first time, when in fact they’ve already connected to it once.
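To see why the walkthrough above produces an inconsistent experience, consider this toy model of two servers that each keep session data in their own memory. This is plain Python, not real ASP, and the class and method names are invented for illustration:

```python
# Illustrative sketch (not real ASP) of why per-server session storage
# breaks under load balancing: each server keeps its own local memory.

class WebServer:
    def __init__(self, name):
        self.name = name
        self.sessions = {}  # session data lives only in this server's memory

    def handle(self, user, page):
        if page == "add_to_cart":
            self.sessions.setdefault(user, []).append("widget")
        # The cart is whatever THIS server has stored for this user.
        return self.sessions.get(user, [])

server_a, server_b = WebServer("ServerA"), WebServer("ServerB")

server_a.handle("alice", "add_to_cart")       # NLB routes request 1 to ServerA
cart = server_b.handle("alice", "view_cart")  # NLB routes request 2 to ServerB
print(cart)  # [] -- ServerB has no record of alice's session
```

Moving the `sessions` dictionary into a shared back-end database is exactly the fix recommended later in this section: then either server would see the same cart.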

In a real-world context, imagine that you’re visiting an e-commerce Web site. You click on an item to add it to your shopping cart. However, when your shopping cart is displayed, it’s empty. The cause might be that the first server you connected to, which accepted your click to add a product, is a different server than the one that displayed your shopping cart. Because the two servers have no way to share session variables, you have a confusing shopping experience to deal with.

The best solution, of course, is to not use session variables at all. Experienced Web developers will instead store user information in a single back-end database, which is equally accessible to all of the Web servers in a cluster. That way, no matter which server a user connects to, their user data will be available, creating a consistent experience. Unfortunately, if your Web site already uses ASP session variables, converting them to another storage method can be complex. For that reason, NLB contains a feature called client affinity, which enables you to partially defeat load balancing to maintain session variable compatibility. Client affinity can be set to three modes, including “off.” In single client mode, NLB ensures that the same server always handles requests from a particular client IP address. The first request from a new IP address is load balanced normally, while all subsequent requests are handled by the same server that handled the first request. Although this technique partially defeats load balancing by only balancing the first request, it ensures that the server containing a user’s session variable data will be the one that user connects to.

Unfortunately, single client mode only works well in intranet environments, where a relatively small number of IP addresses are in use, and where each client accesses the Web cluster directly. On the Internet, clients are far more likely to access the Web cluster through a corporate or ISP firewall, which probably performs Network Address Translation (NAT). NAT allows a large number of clients to access the Internet by sharing a small number of IP addresses. The firewall (or other network device) translates the clients’ private IP addresses into a small pool of public IP addresses. The effect of NAT is that thousands of clients can access your Web site, yet appear to be using only a handful of IP addresses. Even worse, NAT may select different translation IP addresses each time a client sends a request to the Internet. So, on the first connection to your Web site, a client may appear to be using the IP address 64.12.34.5; a subsequent connection from the same client might appear to be coming from 64.12.34.6. NAT defeats NLB’s single client affinity mode, because a single user may appear as multiple IP addresses. NLB’s solution is Class C affinity, which uses a single Web cluster member to handle the requests from an entire range of about 250 IP addresses.
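Here is a rough sketch of the difference between the two affinity modes. The checksum-modulo mapping is my own illustration; NLB's real client-to-server hashing is internal to Windows:

```python
# Illustrative sketch of NLB's two client affinity modes. The crc32
# modulo mapping is an assumption for demonstration purposes only.
import zlib

servers = ["Web1", "Web2", "Web3"]

def pick_server(client_ip, affinity):
    if affinity == "single":
        key = client_ip                    # the full address sticks to one server
    elif affinity == "classC":
        key = client_ip.rsplit(".", 1)[0]  # first three octets: the Class C range
    else:
        raise ValueError("affinity must be 'single' or 'classC'")
    return servers[zlib.crc32(key.encode()) % len(servers)]

# NAT may hand the same user two different public addresses from one
# Class C range; Class C affinity still routes both to the same server.
print(pick_server("64.12.34.5", "classC") == pick_server("64.12.34.6", "classC"))  # True
```

With `single` affinity, 64.12.34.5 and 64.12.34.6 would hash independently and could land on different servers, which is precisely the NAT problem described above.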

The most obvious downside to Class C affinity is load. Suppose, for example, that an America Online user accesses your Web site. NLB might load balance the request to one of your least-powerful servers, if it happens to be the least busy server at the time. With Class C affinity turned on, that poor server will wind up handling the majority of traffic from America Online users, which can be quite significant. This situation applies to any ISP or corporation: Because ISPs and corporations generally configure NAT to use a range of addresses from the same Class C range (which includes about 250 addresses), NLB’s Class C affinity has the effect of dedicating one server to entire ISPs and corporations, effectively defeating the whole point of load balancing.

Tip

With the accompanying need to use Class C affinity, who needs session variables? My recommendation: Get your Web developers to rip out those session variables as quickly as possible, and rely on more scalable solutions like back-end databases to store user information. Then you can turn off client affinity altogether, and let NLB do its wonderful job of load balancing incoming Web traffic.

Design Considerations

NLB requires that you consider a number of factors in your design:

  • The first member of a cluster will become its controller, although you can later designate any member as the cluster controller. The IP address information from the cluster controller’s load balanced adapter is replicated to the load balanced adapters of other cluster members.

  • All cluster members must have two network adapters: A load balanced adapter and a back-end adapter. The load balanced adapter must be accessible to clients and have a static IP address, while the back-end adapter must provide access to back-end resources like a database server, and must be accessible to administrators who will manage the cluster.

  • NLB relies on the fact that all Web servers in a cluster see all traffic directed to the cluster. NLB simply determines which Web server will respond to each incoming request. That generally means your Web servers must all be connected to a hub, and not a switch. Switches work by sending incoming traffic only to a specific MAC address; because incoming cluster traffic goes to a virtual MAC address, your switch may either discard the traffic, or may wind up sending the traffic all through a single port to a single Web server, defeating NLB. NLB doesn’t actually use the exact same MAC address for each cluster member, so switches won’t outright fail, but you need to be very careful with the switch’s configuration to avoid traffic problems, including switch flooding.

    The only safe way to use a switch with NLB is to configure the switch to send all traffic to the virtual MAC to all ports connected to a Web server—effectively turning the switch into a hub.

  • NLB will attempt to load balance all traffic directed at the cluster’s load balanced adapters. You must ensure that each cluster member is equipped to handle any traffic that might reach the load balanced adapter. For example, if one cluster member is running a custom software application, then all cluster members must run that application. Otherwise, users will not be able to consistently access the application.

That last point is especially important for cluster administration: All cluster administration must be done on the cluster controller. If you try to connect to the cluster controller’s load balanced adapter, your traffic will be load balanced normally, and may not reach the controller at all. NLB does include a Request Forwarder feature, which detects incoming administrative traffic and forces it to the cluster controller, but the Request Forwarder can significantly reduce the performance of NLB and your Web servers. Instead, simply direct all administrative traffic to the back-end adapter of the cluster controller, which isn’t load balanced. Your network design should accommodate this use, enabling any administrators to easily connect to the cluster’s back end. Figure 6.1 shows a sample network design that illustrates this design technique.

Figure 6.1. Real-world designs should also include firewalls if your cluster will serve Internet-based clients.

Component Load Balancing

Application Center’s NLB feature is designed to load balance incoming client requests across a number of servers, most often Web servers. Many Web developers like to create programming objects to implement business logic or other important functions. For example, a Web developer might need to create a Web page that calculates the monthly payments for a mortgage loan. Rather than programming the calculations in script within the Web page itself, the programmer might choose to create a faster component that conforms to the Component Object Model (COM), or the newer version, COM+. The component can run directly on the Web server, as shown in Figure 6.2.

Figure 6.2. The COM+ component can access information such as current mortgage rates from the back-end database, and can run independently on multiple Web servers in a farm.

Note

Application Center’s CLB feature is designed only to work with COM+ components, which are written to a newer standard and support the extra features necessary to make CLB work. Components written to the older COM standard may not work properly, although you’ll need to have a developer actually test them to be sure. In any event, developers can easily repackage older COM components in a COM+ shell, a layer of code that allows the older component to follow the newer COM+ rules and work properly with CLB.

Suppose that the Web site becomes very popular, with hundreds of thousands of users connecting every hour. The Web servers would eventually become overloaded and unable to keep up, especially if a number of COM+ objects were in use for other purposes. One way to ease the strain on the Web servers is to move the COM+ objects to another tier of servers. COM+ contains the ability to send object requests to another server, which would allow the Web servers to focus on the task of serving up Web pages. The COM+ server could, in turn, run all of the COM+ objects and handle the back-end database connectivity. Figure 6.3 shows what the new design might look like.

Figure 6.3. Each Web server is programmed to send COM+ requests to a specific server in the new middle tier.

Unfortunately, COM+ requires that servers know the name of the COM+ server they want to use. That means each server would have to have a hardcoded COM+ server name, and if that server was unavailable, the Web servers would be unable to access their COM+ objects. That’s where Component Load Balancing (CLB) comes in. CLB enables you to create a load balanced cluster of COM+ servers, and enables clients to access the cluster as if it were one giant server. The failure of a single cluster member doesn’t prevent clients from accessing the cluster, and you can add capacity to the cluster by simply adding additional members.

CLB: Not for .NET

For more information on the .NET development world, see “What is .NET?” p. 10.

Theory of Operation

Application Center enables you to form CLB clusters, in much the same way that you form NLB clusters. All members of a CLB cluster must have the same COM+ components installed, and they must all share a network connection. Unlike NLB, CLB cluster members are not required to have two network adapters. The CLB clients—the computers that will be sending COM+ requests to the CLB cluster—must also have Application Center installed. That’s usually not a problem, because your Web servers will usually be the CLB cluster’s clients, and the Web servers will run Application Center for other reasons anyway.

You configure each Web server with the list of servers that belong to the CLB cluster. This list is dynamic, enabling you to easily add or remove servers from the CLB cluster if necessary. You also have to install the COM+ components on the Web servers, so that the servers’ registries contain the correct class information to handle the components. Finally, you use Windows’s Component Services console to mark the appropriate COM+ components as load balanced, which means they’ll send their requests to the CLB cluster. Each Web server contacts each CLB cluster member about once a second, and measures the response time from each member. When the Web server needs to access a load balanced COM+ component, it simply contacts the CLB cluster member that’s at the top of its list.
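The client-side selection logic amounts to picking the member with the lowest measured response time. A trivial sketch follows; the member names and timings are invented:

```python
# Illustrative sketch of CLB's client-side routing: each Web server
# polls the CLB members' response times and calls the fastest one.

def fastest_member(response_times_ms):
    """Return the CLB member with the lowest measured response time."""
    return min(response_times_ms, key=response_times_ms.get)

# Hypothetical measurements from the most recent once-a-second poll.
measured = {"ComPlus1": 12.0, "ComPlus2": 4.5, "ComPlus3": 9.1}
print(fastest_member(measured))  # ComPlus2
```

Because the measurements are refreshed about once a second, a member that slows down or stops responding quickly drops out of first place.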

You may not always want to install Application Center on every COM+ client, though. For example, consider the network shown in Figure 6.4. In this diagram, a number of Web servers act as COM+ clients to a CLB cluster. However, suppose a number of internal desktop computers also need to access the same components. Does that mean you have to install Application Center on each desktop computer? That would be pretty expensive, so it’s fortunate that you don’t have to. Instead, you can create a COM+ routing cluster. This is a special cluster that runs both NLB and CLB (and therefore has the requirements of both, including having two network adapters in each member). The routing cluster accepts COM+ requests from the desktop clients, and then uses CLB to pass the request on to the CLB cluster. In effect, the routing cluster acts as a middleman for COM+ requests. You can even point the Web servers at the routing cluster, if you want to, instead of using CLB on the Web servers. Figure 6.5 shows how the network would look with the routing cluster in place.

Figure 6.4. Having the Web servers and desktop clients use the same components is a great idea, since it reuses valuable programming and reduces code maintenance.

Figure 6.5. Including at least two servers in the routing cluster ensures that a server failure won’t prevent access to the CLB cluster that sits behind the routing cluster.

Design Considerations

CLB has a number of requirements and caveats that you need to consider in your design:

  • COM+ components running on a CLB cluster must not make use of any local resources, such as files, which may not be available and identical on every member of the cluster.

  • COM+ components running on a CLB cluster should not try to store any data locally. Instead, they should use a back-end database, which ensures that all cluster members have equal access to the data.

  • Components that will run through a COM+ routing cluster must be able to communicate solely via TCP/IP. That’s because the COM+ routing cluster runs NLB on its client end, and NLB can only accept TCP/IP connections.

  • Components must be written to the newer COM+ specification, not the older COM specification.

  • Components written in Visual Basic 6 (or earlier) are not ideal for use in a CLB cluster. That’s because Visual Basic’s multithreading model isn’t as efficient as the model provided in languages such as Visual C++, where the programmer has more control of the component’s thread handling. The practical result of using Visual Basic 6 components is slower performance on your CLB cluster members.

In general, you also need to decide when the additional expense of a CLB cluster is necessary. After all, CLB means additional servers, additional Application Center licenses, and an additional level of maintenance and administration to deal with. The only way to make this decision is to determine how many extra users your Web servers could support if they weren’t running COM+ components locally. Often, you’ll find CLB clusters to be cost-effective only when the Web servers are running especially complex, long-running COM+ components. In many cases, it may simply be cheaper to buy additional Web servers to handle the load, rather than implementing an entire CLB cluster infrastructure.

Content Deployment and Synchronization

If your organization runs one or two Web servers, deploying new content isn’t a big deal: You just use Explorer to copy files from one location to another. When your Web site begins to grow, however, content deployment and synchronization become time-consuming, error-prone tasks. One of Application Center’s core capabilities is content deployment and synchronization, and it’s designed to take the worry and hassle out of managing large Web sites.

Theory of Operation

Application Center defines content in terms of applications, where an application is everything—Web pages, registry keys, data source names (DSNs), digital certificates, and so forth—that is required for a Web site to function properly. Deploying is the process of copying an application from one Application Center Web cluster, such as a testing cluster, to another, such as a production Web cluster. Synchronizing is the automatic (or semi-automatic) process of ensuring that every Web cluster member has the same application content.
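Conceptually, an application definition is just a named collection of heterogeneous resources. A rough sketch of the idea follows; the resource names and the data structure are my own invention, not Application Center's actual storage format:

```python
# Rough data-structure sketch of an Application Center "application":
# everything a Web site needs to function, not just its files.
# All names and paths here are illustrative.

application = {
    "name": "StoreFront",
    "resources": {
        "web_content":   [r"C:\Inetpub\wwwroot\store"],
        "registry_keys": [r"HKLM\SOFTWARE\StoreFront"],
        "dsns":          ["StoreDB"],
        "certificates":  ["www.example.com"],
    },
}

def resources_to_sync(app):
    """Flatten every resource in the definition into one deployment list."""
    return [item for items in app["resources"].values() for item in items]

print(len(resources_to_sync(application)))  # 4
```

Deployment copies this whole collection from one cluster controller to another; synchronization then pushes the same collection from a controller to its members.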

Note

Typical application definitions don’t include COM+ components. That’s because deploying a COM+ component usually requires a server restart. You can define an application to include only a COM+ component, and then deploy that application as needed to update your Web servers. Application Center is even capable of restarting the server automatically when the deployment is complete.

For the same reason, you can never synchronize a COM+ application, because you wouldn’t want your Web servers to suddenly restart themselves when a new COM+ component became available. You can deploy COM+ applications on a schedule, which gives you full control over server restarts.

Deployment is perhaps the easiest to understand, and occurs only on demand. So, when you’re ready to deploy a bunch of new Web pages from your testing cluster to your production cluster, you simply command a deployment of the appropriate application from one cluster to the other. Deployment can only occur between Application Center Web clusters, although it’s possible to create a cluster with only one member, which prevents you from having to make a major hardware investment for a development or testing cluster. Deployment always occurs from one cluster’s controller to another cluster’s controller, and never to non-controllers. That’s because synchronization (which I’ll discuss next) is responsible for copying content from a cluster’s controller to all other cluster members. In fact, if you do change the content directly on a cluster member, the cluster controller may overwrite that change at any time with its own, authoritative copy of the application content.

Synchronization can be set up to occur automatically whenever content changes on the cluster controller, or on a scheduled basis. I like to use both methods. Automatic synchronization usually works fine, but if you dump a lot of changes onto the cluster controller at once, the automatic synchronization process sometimes misses a page or two. A scheduled synchronization can be relied upon to catch anything that falls through the cracks, keeping your member servers constantly up to date. Like deployments, synchronization can copy anything that’s included in an application definition, including registry keys, files and folders, and even IIS metabase configuration settings.

Note

Application Center’s ability to manage a dozen servers as easily as one comes directly from the product’s synchronization capabilities. You manage a Web cluster by connecting to its cluster controller and making any changes that you need to make. Application Center then synchronizes those changes—including IIS configuration, registry key, and IP address changes—to the other cluster members.

Synchronization does not include applications that distribute COM+ components. If you have a new COM+ component to distribute, you’ll have to manually distribute it to each server in the Web cluster.

Another neat thing that synchronization lets you do is add new servers to your Web farm with very little hassle. If your Web farm can no longer handle your current user demand, simply get a new server that has two network adapters. Install Windows and Application Center, and add the server to your Web cluster. Application Center can automatically copy all of the application content from the cluster’s controller to the new server, and then place the new server into the NLB load balancing loop.

Design Considerations

Deployment and synchronization are fairly easy to design for. The biggest caveat is that content is deployed and synchronized without encryption of any kind. Web developers sometimes encode database passwords and other sensitive information into Web pages, and Application Center won’t do anything to protect that information. If you’re deploying over a wide-area network (such as from your office to an offsite hosting facility), consider using a virtual private network (VPN) to encrypt the stream of data Application Center sends.

Speaking of over-the-WAN deployments, Application Center doesn’t offer any kind of data compression, either. That means large deployments may take a long time, especially if you have a slow WAN connection. Microsoft includes the Content Deployment Service (CDS) with Application Center, which is meant to address the problem. CDS was originally a part of Site Server 3.0, and Microsoft has made no changes to the version included with Application Center. CDS does do compression for over-the-WAN deployments, but it’s a completely standalone product—there’s no integration with Application Center. CDS doesn’t recognize Application Center clusters, so you’ll have to manually configure the deployment path from one cluster controller to another.

Note

I’m of the firm opinion that Application Center 2000’s lack of compression—and the inclusion of CDS—is a reflection of Microsoft’s determination to ship Application Center too early. Hopefully future versions will include integrated over-the-WAN compression, and get rid of the less-than-ideal product named CDS.

Another design consideration is your content: Application Center is designed to deploy and synchronize entire applications, not pieces of them. Imagine that you’ve set up a development cluster, a testing cluster, and a production cluster. At any given time, your development cluster may have pages in various stages of completion. When you’re ready to deploy, you may want to deploy only a portion of the pages on the cluster to the testing cluster, and then to the production cluster. The way to do that is to configure applications on the development cluster that represent only the bits you want to deploy at that time. Deploy those applications to your testing cluster. There, configure an application that represents the entire functional Web application, and deploy that to your production cluster. Figure 6.6 shows this design technique in action.

Figure 6.6. You can configure applications on an ad-hoc basis on the development cluster, and use a single application definition on the testing and production clusters.

Health Monitor

Application Center includes the Microsoft Health Monitor 2.1, which is actually a separate product and a separate installation option. Health Monitor is designed to help you monitor server health, which is slightly different from server performance. Performance is simply a set of numbers indicating how a particular aspect of a server is operating. In personal terms, performance might be something like your heart rate or blood pressure: It’s a number with no context. Health is the context that tells you whether performance is good or bad, such as a chart listing the acceptable blood pressure range for someone of your age, gender, and so forth. Health is usually measured in ranges, and the ends of those ranges are called thresholds. When you pass the threshold of the acceptable blood pressure range, you’re no longer considered “healthy.” Health Monitor brings performance, health, thresholds, and more together in a consolidated management environment.

Theory of Operation

Health Monitor works by collecting performance information from a number of providers. A provider is simply a piece of software capable of delivering performance data. Most of Windows’s various subsystems—processor, memory, disk, and so forth—deliver performance data to Health Monitor through the Windows Management Instrumentation (WMI) provider. Health Monitor includes some additional providers, which allow it to measure things like the response time for a ping command, whether or not a response can be received from a particular URL, and so forth. A single performance item—such as processor utilization—is called a performance counter.

You define monitors, which are collections of specific performance counters, thresholds, and actions. For example, you might configure a monitor that examined processor utilization, with thresholds of 0% to 60% for good, 61% to 75% for warning, and 76% to 100% for critical. For each threshold, you can also configure actions, which are automated responses that occur when a counter enters a threshold. For example, when the monitor enters the critical threshold, you might have an action that sends you an email message or pages you on your mobile phone, so that you can analyze the situation and take action.
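The threshold-and-action behavior described here can be modeled in a few lines. The ranges match the example above; the function names and the notify callback are my own invention, not Health Monitor's API:

```python
# Hypothetical sketch of a Health Monitor-style monitor: classify a
# counter value against thresholds, and fire an action on "critical".
# Ranges follow the text: 0-60 good, 61-75 warning, 76-100 critical.

def classify(cpu_percent):
    if cpu_percent <= 60:
        return "good"
    elif cpu_percent <= 75:
        return "warning"
    return "critical"

def check(cpu_percent, notify):
    """Evaluate one sample; invoke the action when it enters critical."""
    state = classify(cpu_percent)
    if state == "critical":
        notify(f"CPU at {cpu_percent}% -- critical threshold crossed")
    return state

alerts = []           # stands in for an email or pager action
check(82, alerts.append)
print(alerts)         # one critical alert recorded
```

A real monitor would typically fire the action only on the transition into the critical range, rather than on every sample, to avoid paging you once a second.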

Tip

Health Monitor comes with a number of optional monitors that you can choose to install. These provide preconfigured monitors with thresholds that reflect Microsoft’s opinion of what represents good and bad server health. Even if you just use them as a starting point for customization, these preconfigured monitors can save you a lot of time when setting up your monitoring infrastructure.

The sample monitors include more than just Web server uses, too. Microsoft provides sample monitors for just about every product that a Web site might include, such as BizTalk Server, SQL Server, Commerce Server, and so forth.

Health Monitor also stores performance data in a local database. That local database is self-maintaining, and includes performance data for various periods of time (last 10 minutes, last 24 hours, last week, and so on). Using the Health Monitor console, you can query and consolidate the logged performance data from a number of servers, enabling you to view the health of a single server or your entire Web farm all at once. The logged data is also useful for performance trending, enabling you to determine, for example, when your Web site’s growth will outstrip its capacity, and to do something about it before it becomes a problem.

Design Considerations

In order for Health Monitor’s local performance logging to work, each server must run the Microsoft Data Engine (MSDE), which Health Monitor will install by default. The MSDE is basically a junior version of Microsoft SQL Server 2000. Because SQL Server 2000 is designed to run multiple copies of itself on a single computer, the MSDE installed by Health Monitor will not conflict with any version of SQL Server 2000 that you’ve already installed (although you should reinstall the latest SQL Server service pack after installing Application Center). Health Monitor preloads the MSDE database with stored procedures and SQL Server Agent jobs that keep the database self-maintaining, so you should never have to deal with it. If you’re really concerned about performance, you can deselect the MSDE installation option. I don’t recommend doing so, because the performance hit of the MSDE is barely measurable, and it provides you with a wealth of invaluable performance trending information.

Health Monitor is a convenient addition to Application Center that gives you centralized monitoring of multiple servers—the sort of “many servers as easy as one” philosophy that is the whole point of Application Center. However, if you’re after enterprise-class, all-in-one monitoring, then you want Microsoft Operations Manager (MOM). MOM offers most of the same capabilities as Application Center, only on a larger scale. Experts replace monitors, providing preconfigured thresholds and other features.

Note

For more information on MOM, see “The Missing Servers,” p. 69

Perhaps the most important design consideration for Health Monitor is network bandwidth. If you want to gather consolidated information from a dozen Web servers, and they’re located on the other side of a WAN link from your workstation, then you’ll need to be prepared to wait a while for the information to come over. The best solution to this problem is to keep the data on one side of the WAN link. One way to do that is to run the Health Monitor console on one of your Web servers, or on another computer that’s on the same local network as your Web servers. You can use Terminal Services in Remote Admin mode to control remote servers as easily as if you were standing right in front of them.

Note

For more information on Terminal Services’ capabilities, see “Windows Enterprise Technologies,” p. 87

Supporting Technologies

Like most of the .NET Enterprise Servers, Application Center doesn’t stand alone. Application Center’s CLB features build on functionality provided by COM+, NLB works closely with the Windows TCP/IP stack, content deployment relies entirely on IIS’s architecture, and Application Center’s Health Monitor is built in large part on Windows Management Instrumentation. In the next three sections, I’ll describe these supporting technologies. Understanding what they do and how they work with Application Center will help you make better design decisions when implementing Application Center in your environment.

Internet Information Services

Application Center relies entirely upon IIS to accomplish its magic. Server configuration is synchronized within a Web cluster largely through the replication of the IIS metabase, which contains all of IIS’s configuration information. Application Center also provides a Web-based administration interface that relies upon IIS and ASP to function.

NLB itself doesn’t rely on IIS, although it does support IIS’s ASP session variables. NLB interfaces with the operating system at a much lower level, through the Windows TCP/IP protocol stack. CLB also interfaces with the operating system at a fairly low level, since Component Services and COM+ are integral, core components of the operating system. Health Monitor includes a number of preconfigured monitors that rely on IIS, such as URL response monitors and other monitors that test Web site responsiveness.

Windows Management Instrumentation

WMI acts as a provider of performance data to Health Monitor, providing data for most of the operating system’s performance counters. Although Health Monitor can work with a number of different providers, the WMI provider includes access to the performance counters you’re probably used to seeing in the Windows System Monitor. Interestingly, WMI can also be used to create some pretty powerful scripts that you can use as actions in a Health Monitor threshold. WMI is capable of modifying just about any operating system setting, enabling you to write scripts that restart servers, change user accounts, delete files, and much more, all in response to adverse performance conditions.

COM+

COM+ is an evolution of Microsoft’s original Component Object Model (COM), a programming standard that described how software developers could write applications that interacted with one another in a standard, predictable fashion. COM was originally intended only for local computers; Distributed COM (DCOM) introduced the ability to send COM requests across a network to other computers. Microsoft Transaction Server (MTS) added the ability to perform transactional programming within COM components. All of those features—and more—were rolled up into COM+.

Application Center supports COM+ at the component level, by enabling you to configure applications that can deploy COM+ components to your Web clusters. Application Center relies on COM+ to make CLB work: CLB simply intercepts COM+ calls and distributes them to a server selected from a routing list. COM+ already has the ability to send requests to remote machines; CLB simply adds the ability to load balance those requests across a cluster of COM+ servers.
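The routing behavior CLB layers on top of COM+ can be sketched roughly as follows. This is a simplification, not CLB's actual algorithm—the real implementation periodically polls each cluster member's response time and works through the resulting ordered routing list—but it shows the essential idea of intercepting activations and spreading them across members:

```python
# Rough sketch of CLB-style routing: order the routing list by measured
# response time, then hand out component activations round-robin through it.
# A simplification of CLB's real behavior, for illustration only; the
# server names and timing figures are invented.

from itertools import cycle

class ComponentRouter:
    def __init__(self, response_times):
        # response_times: {server_name: last measured response time in ms}
        self.refresh(response_times)

    def refresh(self, response_times):
        """Re-sort the routing list, fastest member first (CLB re-polls
        its members' response times on a short interval)."""
        ordered = sorted(response_times, key=response_times.get)
        self._next = cycle(ordered)

    def route(self):
        """Return the member that should handle the next activation."""
        return next(self._next)

router = ComponentRouter({"CLB1": 40, "CLB2": 12, "CLB3": 25})
print([router.route() for _ in range(4)])  # ['CLB2', 'CLB3', 'CLB1', 'CLB2']
```

The key point for design purposes is that the caller never sees any of this: it makes an ordinary COM+ activation, and the routing happens underneath.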

.NET Enterprise Server Integration

Application Center provides functionality that integrates well with other .NET Enterprise Servers. Commerce Server in particular, as another Web-based product, can benefit from Application Center’s features. For that matter, just about any TCP/IP-based application can use Application Center’s services. In the next two sections, I’ll show you how you can make Application Center work and play well with these other products, leveraging the .NET Enterprise Servers’ integration to produce more effective solutions in your environment.

Commerce Server

Application Center is the perfect companion to Commerce Server, which isn’t surprising, considering the fact that both Application Center and Commerce Server evolved from a single product: Microsoft Site Server. Commerce Server provides a toolkit for developing e-commerce sites; Application Center can then help manage those sites when implemented in a Web farm. Application Center’s content deployment and synchronization capabilities make it easier to add new servers to a Commerce Server site, and to implement a tiered deployment infrastructure.

Note

For more information on Commerce Server’s capabilities, see “Technology Capabilities,” p. 230

Content Management Server

Content Management Server makes a great back-end companion to Application Center. Content Management Server is designed to provide a formal Web content development, editing, approval, and publishing process. You can configure your Content Management Server computers to publish their content to an Application Center Web cluster for final testing, and then use Application Center to deploy the content to your production Web farm.

Note

For more information on Content Management Server’s capabilities, see “Technology Capabilities,” p. 260

TCP/IP-based Applications

Although Application Center is primarily marketed (and used) for Web sites, it works well with any application that operates over TCP/IP and within certain rules. For example, suppose you have a custom client-server application within your organization. Application Center would enable you to build a server farm to handle incoming client requests, just as NLB enables you to build a Web farm to handle incoming Web requests. Your server application simply needs to follow some basic rules:

  • Your server application must accept incoming client connections on a single TCP/IP port. Many client-server products, like Exchange Server, use endpoint mapping. Endpoint mapping allows clients to connect on one port, and then obtain a different TCP/IP port for the rest of their communications with the server. Each client is usually mapped to a different port. Application Center won’t work with this type of application, because clients are load balanced to completely different servers. Just because one server in the cluster allocates a particular port to a client doesn’t mean the other servers will honor that allocation.

  • Server applications must rely completely on a common back-end database for data storage. If a server application relies on local storage for client data, then you’ll need to test the server application with one of NLB’s client affinity modes.

  • The entire client-server application must use a connectionless environment, just as Web servers do. In other words, each request sent from the client to the server must initiate a new session, send and receive any necessary data, and then tear down that session. That way, subsequent requests to the server will be load balanced again. Clients should never assume that the server they are contacting has any existing information about them, since clients may connect to a number of different servers over the course of a session.

Very few traditionally written client-server applications meet these rules, which is why Application Center is most often found in Web applications (which inherently meet these rules due to the way the Web works). However, if your organization is developing new applications, you may want to make your application developers aware of Application Center’s requirements. By following these basic rules, your developers can create an application that uses server farms, making the application inherently more reliable and scalable than most traditional applications.
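The rules above boil down to a simple pattern: every request is self-contained, and all state lives in the shared back end. Here's a minimal sketch of a farm-friendly request handler—the server names, request format, and dictionary-as-database are all invented for illustration:

```python
# Minimal sketch of a "farm-friendly" server application, per the rules
# above: no server-local client state, and every request is self-contained.
# The shared dict stands in for the common back-end database; the server
# names and request format are hypothetical.

BACKEND = {}  # shared back-end store -- every farm member reads/writes here

def handle_request(server_name, request):
    """Process one self-contained request. The client may be balanced to a
    different server next time, so nothing is kept in local memory."""
    action, client_id, payload = request
    if action == "put":
        BACKEND[client_id] = payload
        return (server_name, "ok")
    elif action == "get":
        return (server_name, BACKEND.get(client_id))

# Load balancing may send each request to a different member; the
# application still works because all state lives in the back end.
print(handle_request("WEB1", ("put", "client42", "cart: 3 items")))
print(handle_request("WEB2", ("get", "client42", None)))  # ('WEB2', 'cart: 3 items')
```

Notice that the second request succeeds even though a different server handles it—exactly the property that lets a load balancer route each request independently.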

Incorporating Application Center into Your Design

Like most products, you can do a lot to improve Application Center’s effectiveness in your environment by creating a carefully thought-out design that leverages Application Center’s strengths. Since Application Center’s features operate more or less on their own, I’ll use the next four sections to describe how each one can be integrated independently into an overall solution architecture. I’ll also show you how the various features can be used together to create an even more effective design that offers additional features and capabilities.

Designing a Web Farm

Web farms are fairly easy to design, so long as you remember the basic design considerations I laid out earlier in this chapter. Figure 6.7 shows a fairly typical Web farm, which includes a load balanced and back-end network, access to the cluster controller from the internal network, and a back-end SQL Server computer used to store the site’s database.

Figure 6.7. Note the location of the site’s back-end resources behind the firewall, protecting them from any intruders that make it through the first firewall.

If your Web farm is hosted at a hosting facility, rather than in your office, you may be limited in the type of network you can set up. Often, you’re restricted to a single firewall, and that firewall may make it difficult to connect to the cluster’s back-end network for administration purposes. In those cases, I recommend the use of a VPN server. Connected to both the load balanced and back-end networks, the VPN server can enable you to remotely connect to the back-end network, and the VPN’s simpler port requirements make it easier to open the necessary ports on the firewall. Figure 6.8 shows how to set it up.

Figure 6.8. The VPN server isn’t a part of the Web farm, allowing you to connect directly to it without worrying about NLB.

Tip

You could set up your VPN architecture so that each client established an independent session to the remote VPN server. The scenario in Figure 6.8, however, provides users with an easier solution, since it makes the remote servers appear to be connected to the local network. Users won’t have to take any extra steps—such as launching a VPN connection—to connect to the remote Web servers.

Designing a COM+ Farm

Application Center’s COM+ routing clusters and CLB clusters offer a lot of flexibility. At their simplest, a Web server can connect directly to a CLB cluster, as shown earlier in this chapter in Figure 6.4. Or, if you need to allow desktop clients to utilize CLB’s load balancing, you can use a COM+ routing cluster, which was shown in Figure 6.5.

Perhaps the most complex COM+ scenario is shown in Figure 6.9. In this design, desktop clients need to use a number of COM+ components, which are shared by the Web site. The Web site also needs to use a number of COM+ components that are unique to the site. Those components are actually divided into two separate CLB clusters, which enables each set of components to scale independently to meet demand. This design requires a CLB cluster and at least two COM+ routing clusters. The benefit of COM+ routing clusters is that they enable a Web cluster to utilize multiple CLB clusters; the Web cluster actually believes that it’s connecting to a single COM+ server, while NLB on the COM+ routing cluster handles the load balancing to a back-end CLB cluster.

Figure 6.9. The Web cluster connects to one CLB cluster directly, and to two others via a COM+ routing cluster.

Designing a Content Deployment Infrastructure

Basic content deployment design, like the setup shown in Figure 6.6, is fairly easy to set up. Keep in mind that testing and development clusters can consist of a single computer (the cluster controller), which saves on server hardware and Application Center licenses.

Tip

I do recommend that your testing cluster include at least two servers. That way, you can test intra-cluster synchronization of your content before deploying to your production cluster.

More complex deployment scenarios are required when your production Web cluster isn’t located at the same site as your development or testing cluster. As shown in Figure 6.10, you can use a VPN server to deploy from a testing cluster located in your office to a Web cluster that’s hosted offsite.

Figure 6.10. The VPN server can also serve as an endpoint for administrative traffic.

Some of the companies I’ve worked with were concerned with the single point of failure that the cluster controller represents. While a Web cluster can continue handling user requests with the controller down, administrative functions and content synchronization require the controller to be functional. To provide a fully redundant solution, those companies elected to build two Web clusters, and use an external load balancing device to distribute incoming traffic between the two clusters. The design is a bit of overkill, but it certainly provides maximum fault tolerance. One problem does come up, though, when both clusters are hosted offsite: That effectively doubles deployment traffic, since your on-site testing cluster must deploy content to two separate production cluster controllers. The solution is to set up a single-server staging cluster offsite, as shown in Figure 6.11. In this configuration, the testing cluster deploys applications once, to the staging cluster. The staging cluster can then deploy to both production clusters using faster local bandwidth.

Figure 6.11. You could also use a VPN server in this scenario to encrypt the data sent from the testing cluster to the staging cluster.

Alternative Technologies and Products

Application Center isn’t the only product in the world that does what it does. Its network load balancing capability and its health and performance monitoring features can be found in a number of other products, including some other Microsoft products. CLB, however, is unique to Application Center, and you won’t find competing products that provide that capability (in large part because CLB has to integrate so closely with COM+ and the Windows operating system). Application Center’s server management features aren’t unique either, but Application Center is currently the only product that provides those features for IIS-based Web sites. Finally, Application Center’s content deployment and synchronization capabilities aren’t exactly unique, but Application Center is the only product that offers such broad capabilities for the IIS platform.

Note

I’m obviously making the assumption that your Web site runs on IIS, since you picked up this book on the Microsoft .NET Enterprise Servers. If you’re using a non-IIS Web server, then all bets are off: Application Center won’t work on anything but IIS.

Hardware-Based Load Balancing

A number of companies offer hardware-based load balancing solutions. One of the most popular is Cisco’s LocalDirector. Many high-end switch and router manufacturers, including Foundry, include load balancing capabilities in their devices. Most hardware load balancers can use either round-robin or least connections balancing. Round-robin simply distributes incoming requests to the first server, then the second, and so on, starting over again when the end of the list is reached. Least connections tracks the number of connections each server is handling and sends new requests to the server with the fewest connections. Some hardware load balancers enable you to configure a weight for each server to reflect its relative capacity; some do not.
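Both strategies are simple enough to sketch. The following is hypothetical illustration, not any particular device's firmware; the server names and connection counts are invented:

```python
# Sketch of the two common hardware balancing strategies described above,
# plus the optional per-server weighting. Purely illustrative.

from itertools import cycle

def round_robin(servers):
    """Yield servers in order, starting over at the end of the list."""
    return cycle(servers)

def least_connections(connections):
    """Pick the server currently handling the fewest connections."""
    return min(connections, key=connections.get)

def weighted_least_connections(connections, weights):
    """Divide each count by the server's relative capacity, so a bigger
    box can carry proportionally more connections."""
    return min(connections, key=lambda s: connections[s] / weights[s])

rr = round_robin(["web1", "web2", "web3"])
print([next(rr) for _ in range(4)])  # ['web1', 'web2', 'web3', 'web1']

active = {"web1": 12, "web2": 7, "web3": 9}
print(least_connections(active))  # 'web2'

weights = {"web1": 2.0, "web2": 1.0, "web3": 1.0}
print(weighted_least_connections(active, weights))  # 'web1' (12/2.0 = 6)
```

The weighted variant shows why weighting matters: web1 has the most raw connections, but relative to its doubled capacity it is actually the least loaded member.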

The biggest downside to the way NLB works is the fact that all servers must be able to see all cluster traffic, due to the way NLB’s virtual MAC address operates. This technique creates a lot of unnecessary traffic that each server must examine, however cursorily, resulting in a slight performance hit for each Web cluster member. External devices don’t have that problem, although external devices aren’t capable of NLB’s flexibility, which stems from its integration with the Windows operating system.

The best solution, of course, is a hardware load balancing device that can receive information from Application Center. Such a device would know which cluster members were available and which were not, and could even take advantage of better server-utilization data when making load balancing decisions. Microsoft actually built Application Center to support external devices, and developed a complete specification that external devices can use to query Application Center for the necessary information. At this time, I’m not aware of any major-brand devices that implement this option, but keep your eyes open: As Application Center grows more popular, device manufacturers will have more incentive to take advantage of its capabilities in their future offerings.

Health and Performance Monitoring

Health and performance monitoring solutions are numerous, although they can be quite expensive. HP offers its enterprise-class OpenView solution, which is pretty much the king of the monitoring universe, and commands a kingly price, too. NetIQ offers a number of management solutions, some of which were licensed by Microsoft to create Microsoft Operations Manager (MOM), which is Microsoft’s own enterprise-class monitoring solution.

You may have access to other solutions provided by your server manufacturer. Both IBM and Compaq offer fully functional monitoring solutions with their servers: IBM provides a trimmed-down version of its Tivoli subsidiary’s monitoring software, while Compaq’s Insight Manager is well-known for its capabilities and ease of use. Both packages offer monitoring capabilities beyond the server hardware, including URL responsiveness and so forth. Neither package, unfortunately, provides any application-specific monitoring for IIS, SQL Server, or other products.

Real-World Example

In Chapter 4, “Matching Business Needs and Technologies,” I introduced you to Pete’s Big Beverages (PBB), a multinational company with a number of outstanding business needs. One of those needs was for better Web management and Web content deployment. In this section, I’ll look at how PBB might use Application Center to solve those needs.

PBB’s public Web site currently consists of four Web servers, half of which run Linux. PBB has already made the decision to move those servers to Windows 2000 Server, and to convert the Web site entirely over to Active Server Pages (ASP). PBB’s big problem is that maintaining and managing four Web servers consumes a lot of time. Whenever a configuration change is made to the site, it has to be made to all four servers independently. Also, content deployment is overly complex: Once the Web staff finally approves content for production use, that content has to be deployed independently to each Web server, which is both time-consuming and error-prone. PBB already has an external load-balancing solution that directs new incoming connections to the server with the least number of connections. PBB’s Web site does not make use of ASP session variables.

Application Center offers a perfect solution for this aspect of PBB’s business problems:

  • NLB can be used instead of the external load balancing solution. NLB provides better load balancing, since it can better account for differences between server hardware, and because it accounts for server utilization, not just least connections.

  • Application Center’s server management features provide the easier Web farm management that PBB is looking for.

  • Application Center’s content synchronization and deployment will provide just what PBB needs for tiered content deployment. PBB can create a development cluster for content creation, create a testing cluster for final content review, and deploy to the production cluster when the content is ready to go.

  • The fact that ASP session variables aren’t in use on the PBB Web site means that NLB client affinity can be disabled, improving load balancing flexibility. Also, PBB can implement a front- and back-end network, with all cluster management performed via the back-end network. That will eliminate the need to use the Application Center request forwarder.

Figure 6.12 shows a portion of the new PBB network, which includes the production Web farm, the development and staging clusters, and the firewalls that will be used to protect the corporate network from Internet traffic.

Figure 6.12. Note the use of two firewalls to create a demilitarized zone (DMZ), where the production Web servers are deployed. All other resources sit on the protected network behind the second firewall.

Note

PBB’s Web site will probably make use of other resources, such as a SQL Server computer. These resources would also be deployed on the protected network, along with the development and testing Application Center clusters.

Summary

Application Center offers a number of different features for your network. Fortunately, those features are largely independent, enabling you to pick and choose the ones you want and ignore the rest. In this chapter, you learned how NLB, CLB, health monitoring, server management, and content deployment all work in an Application Center environment. You also learned about the critical design factors that you need to consider when incorporating Application Center into your environment, and you learned about some of the third-party products that provide similar functionality. I also provided you with several sample designs to help you see how Application Center might fit into your environment.
