Chapter 22. Special Considerations for Client-Server and Web-Enabled Environments

Introduction

The challenge facing many infrastructure managers today is how to apply the structure of these mainframe-developed processes to the less-structured environment of client/server systems and to the relatively unstructured environment of the Internet. In this final chapter we look at some special aspects of systems management that we need to consider for these emerging environments.

We begin by examining several issues relating to processes implemented in a client/server environment that differ significantly from those implemented in a mainframe or midrange environment. Some of these topics could have been included in their respective process chapters, but I included them here instead because of the unique way the client/server environment affects these processes. We conclude with a brief look at some of the cultural differences in a web-enabled environment as they relate to systems management processes.

Client-Server Environment Issues

There are five key issues worth discussing in relation to applying systems management processes to a client/server environment:

  1. Vendor relationships
  2. Multiplatform support
  3. Performance and tuning challenges
  4. Disaster-recovery planning
  5. Capacity planning

Vendor Relationships

Vendor relationships take on additional importance in a client/server shop. This is due in part to a greater likelihood of multiple platforms being used in such an environment. Traditional mainframe shops limit their platforms to only one or two manufacturers due to the expense and capabilities of these processors. Fewer platforms mean fewer vendors need to be relied upon for marketing services, technical support, and field maintenance.

Multiplatform Support

However, in a client/server environment, multiple platforms are frequently used due to their lower cost and diverse architectures, which allow them to be tailored and tuned to specific enterprise-wide applications. Client/server shops typically employ three or more types of servers from manufacturers such as Sun, HP, IBM, and Compaq. Both hardware and software vendors are greater in number in a client/server environment due to the variety of equipment and support packages required in such a shop. The sheer number of diverse products makes it difficult for companies to afford to use mostly in-house expertise to support these assets. This means it’s even more important for infrastructure managers to nurture a sound relationship with vendors to ensure they provide the necessary levels of support.

Another issue raised by the presence of a variety of server platforms is that of technical support. Supporting multiple architectures, such as NT or any number of UNIX variants, requires either training existing technicians across multiple platforms, which likely spreads them thin, or hiring additional technicians who specialize in each platform. In either case, the total cost of supporting multiple technologies should be considered prior to implementing systems management processes.

Performance and Tuning Challenges

A third topic to consider in a client/server environment involves a number of performance and tuning challenges. The first challenge arises from the large number of different operating system levels typically found in a client/server environment. Diverse platforms such as Linux, NT, and the many versions of UNIX (for example, IBM AIX, HP-UX, and Sun Solaris) run widely differing operating systems. They require different skill sets and, in some cases, different software tools to tune their host server systems effectively.

The various performance and tuning challenges include:

  1. Variations in operating system levels
  2. Impact of database structures
  3. Integration of disk storage arrays
  4. Application system changes
  5. Complexities of network components
  6. Differing types of desktop upgrades

Even shops that have standardized on a single architecture (for example, Sun Solaris) are likely to run multiple levels of the operating system when a large number of servers is involved. Key tuning components, such as memory size, the number and length of buffers, and the quantity of parallel channels, may change from one level of the operating system to the next, complicating the tuning process. This occurs less in a mainframe or midrange environment, where fewer processors with larger capacities result in fewer varieties of operating system levels.
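Where this becomes a practical concern, some shops automate the comparison of key tuning parameters across their servers. The following minimal sketch, written in Python, shows one way such a drift check might look; the host names, parameter names, and baseline values are hypothetical, and in practice the per-server values would be gathered by an inventory or monitoring tool rather than hard-coded.

    # Minimal sketch of an OS-level drift check across similar servers.
    # Host names, parameter names, and baseline values are hypothetical.

    BASELINE = {
        "os_release": "5.10",        # expected operating system level
        "shmmax": 4294967295,        # shared memory ceiling
        "tcp_conn_req_max_q": 1024,  # listen queue depth
    }

    # Sample collected values; normally gathered from each server.
    FLEET = {
        "app-server-01": {"os_release": "5.10", "shmmax": 4294967295, "tcp_conn_req_max_q": 1024},
        "app-server-02": {"os_release": "5.9",  "shmmax": 2147483647, "tcp_conn_req_max_q": 128},
    }

    def report_drift(baseline, fleet):
        """Print any server whose tuning parameters differ from the baseline."""
        for host, params in sorted(fleet.items()):
            diffs = {k: (v, params.get(k)) for k, v in baseline.items()
                     if params.get(k) != v}
            if diffs:
                print(f"{host}: drift detected")
                for name, (expected, actual) in diffs.items():
                    print(f"  {name}: expected {expected}, found {actual}")

    report_drift(BASELINE, FLEET)

A report like this does not tune anything by itself, but it highlights which servers deviate from the agreed configuration and therefore where tuning effort should be focused first.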

The dedication of an entire server to a specific application is another issue to consider when discussing the tuning of operating systems in a client/server environment. In a mainframe shop, most applications typically share a single instance of an operating system, whereas client/server applications often run on dedicated platforms. You might think that this would simplify operating system tuning since the application has the entire server to itself. But, in fact, frequent application updates, the expansion of usable data, hardware upgrades, and the continual growth in users can make tuning these operating systems more challenging than tuning their mainframe and midrange counterparts.

A second tuning challenge involves the structure of databases. As the use and number of users of the client/server application increase, so does the size of its database. This growth can require ongoing changes to a number of tuning parameters, such as directories, extents, field sizes, keys, and indices. As the number of total users increases, the transaction mix to the database changes, requiring tuning changes. As the number of concurrent users increases, adjustments must be made to reduce contention to the database.
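To make the effect of database growth concrete, the short sketch below (a hypothetical illustration, not drawn from any particular database product) projects how long a fixed storage allocation will accommodate table growth. This is the kind of estimate that prompts changes to extents, tablespaces, and related parameters; the table names, row sizes, and growth rates are sample assumptions.

    # Minimal sketch: project months of growth a storage allocation supports.
    # All table names, sizes, and growth rates are hypothetical sample data.

    TABLES = {
        # table: (current_rows, avg_row_bytes, new_rows_per_month)
        "orders":     (5_000_000, 250, 400_000),
        "order_item": (20_000_000, 120, 1_600_000),
    }

    TABLESPACE_BYTES = 40 * 1024**3  # 40 GB allocated (assumption)

    def months_until_full(tables, capacity_bytes):
        """Estimate how many months of growth the current allocation supports."""
        used = sum(rows * row_bytes for rows, row_bytes, _ in tables.values())
        monthly = sum(growth * row_bytes for _, row_bytes, growth in tables.values())
        remaining = capacity_bytes - used
        return remaining / monthly if monthly else float("inf")

    print(f"Approx. months until tablespace is full: "
          f"{months_until_full(TABLES, TABLESPACE_BYTES):.1f}")

Simple projections like this are no substitute for vendor-specific tuning, but they give database administrators early warning of when directory, extent, and index parameters will need attention.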

A third tuning issue involves the use of large-capacity storage arrays. Improvements in reliability and cost/performance have made large-capacity storage arrays popular resources in client/server environments. Huge databases and data warehouses can be housed economically in these arrays. Several databases are normally housed in a single array to make the storage devices as cost effective as possible. When the size or profile of a single database changes, tuning parameters of the entire array must be adjusted. These include cache memory, cache buffers, channels to the physical and logical disk volumes, the configuration of logical volumes, and the configuration of the channels to the server.

Application systems in a client/server environment tend to be changed and upgraded more frequently than mainframe applications due to the likely growth of users, databases, and functional features. These changes usually require tuning adjustments to maintain performance levels. Application and database changes also affect network tuning. Most firms use an enterprise-wide network to support multiple applications. Different applications bring different attributes of network traffic. Some messages are large and infrequent while others are short and almost continuous. Networks in these types of environments must be tuned constantly to account for these ongoing variations in network traffic.

Desktop computers share a variety of applications in client/server shops. As more applications are made accessible to a given desktop, it will most likely need to be retuned and upgraded with additional processors, memory, or disk drives. The tuning becomes more complicated as the variety of applications changes within a specific department.

Disaster-Recovery Planning

A fourth topic to consider in a client/server environment is that of disaster recovery and business continuity. The following list features five issues that make this systems management function more challenging in such an environment. The first issue is the greater variation in types of server platforms in a client/server shop, where critical applications are likely to reside on servers of differing architectures.

  1. Greater variation in types of server platforms
  2. Larger number of servers to consider
  3. Network connectivity more complex
  4. Need to update more frequently
  5. Need to test more frequently

Effective disaster-recovery plans are not easy to develop under the best of circumstances. It becomes even more complicated when multiple server architectures are involved. This leaves disaster-recovery planners with three options:

  1. Select a single server architecture. Ensure this is an architecture on which the majority of mission-critical applications reside and around which one can develop the recovery process, to the exclusion of the other critical applications. While this approach simplifies disaster-recovery planning, it can expose the company to financial or operational risk by not providing necessary systems in the event of a long-duration disaster.
  2. Select a standard server architecture. Run all critical applications on this single architecture. This simplifies the disaster-recovery model to a single architecture but may require such extensive modifications to critical applications as to outweigh the benefits. In any event, thorough testing will have to be conducted to ensure full compatibility in the event of a declared disaster.
  3. Design the recovery process with multiple server platforms. Ensure that these platforms can accommodate all critical applications. This approach will yield the most comprehensive disaster recovery plan, but it is also the most complex to develop, the most cumbersome to test, and the most expensive to implement. Nevertheless, for applications that are truly mission-critical to a company, this is definitely the strategy to use.

The second disaster-recovery issue to consider is the larger number of servers typically required to support mission-critical applications, even if the servers are all of a similar architecture. Multiple servers imply that there will be more control software, application libraries, and databases involved with the backing up, restoring, and processing of segments of the recovery processes. These segments all need to be thoroughly tested at offsite facilities to ensure that business processing can be properly resumed.

Network connectivity becomes more complicated when restoring accessibility to multiple applications on multiple servers from a new host site. Extensive testing must be done to ensure connectivity, interoperability, security, and performance. Connectivity must be established among desktops, databases, application systems, and server operating systems. There must be interoperability between servers with different architectures. The network that is being used during a disaster recovery must have the same level of security against unauthorized access as when normal processing is occurring. Performance factors such as transaction response times are sometimes degraded during the disaster recovery of client/server applications due to reduced bandwidth, channel saturation, or other performance bottlenecks. Heightened awareness of these network issues and thorough planning can help maintain acceptable performance levels during disaster recoveries.
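One way to reduce the manual effort of these checks is to script the basic connectivity and response-time verification that runs from the recovery site once services are restored. The sketch below is a minimal, hypothetical example in Python; the host names, ports, and time limits are assumptions, and a real test plan would also cover interoperability and security.

    # Minimal sketch: connectivity and connect-time checks from a recovery site.
    # Service names, hosts, ports, and thresholds are hypothetical.

    import socket
    import time

    SERVICES = [
        # (service name, host at recovery site, port, max acceptable connect time in seconds)
        ("order-entry-db", "dr-dbserver.example.com", 1521, 0.5),
        ("web-frontend",   "dr-webserver.example.com", 443, 0.5),
    ]

    def check_service(host, port, timeout=3.0):
        """Return (reachable, seconds taken to establish a TCP connection)."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True, time.monotonic() - start
        except OSError:
            return False, time.monotonic() - start

    for name, host, port, limit in SERVICES:
        ok, elapsed = check_service(host, port)
        status = "OK" if ok and elapsed <= limit else "ATTENTION"
        print(f"{status:9} {name}: reachable={ok}, connect_time={elapsed:.2f}s (limit {limit}s)")

A scripted check of this kind can be rerun after every plan update, which helps keep recovery testing in step with the frequent changes described below.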

Many client/server applications start small and grow into highly integrated systems. This natural tendency of applications to grow necessitates changes to the application code, to the databases that feed them, to the server hardware and software that run them, to the network configurations that connect them, and to the desktops that access them. These various changes to the operating environment require disaster-recovery plans and their documentation to be frequently updated to ensure accurate and successful execution of the plans.

These various changes also necessitate more frequent testing to assure that none of the modifications to the previous version of the plan undermines its successful implementation. When a disaster-recovery service provider is used, some of these changes may result in new requirements that need to be thoroughly tested by that supplier as well.

Capacity Planning

The final systems management issue to consider in a client/server environment is capacity planning. The use of applications in such an environment tends to expand more quickly and more unpredictably than those in a mainframe environment. This rapid and sometimes unexpected growth in the use of client/server applications produces increased demand for the various resources that support these systems. These resources include server processors, memory, disk, channels, network bandwidth, storage arrays, and desktop capacities.

The increasing demand on these resources necessitates accurate workload forecasts for all of these resources to ensure that adequate capacity is provided. Frequent updates to these forecasts are important to assure that an overall capacity plan is executed that results in acceptable performance levels on a continuing basis.
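A simple illustration of such a forecast appears below. It is a minimal sketch, not a full capacity-planning model: it fits a straight line to hypothetical monthly CPU-utilization samples for one server and flags when the projection crosses an assumed planning threshold. Real plans would cover every resource listed above and be refreshed as new measurements arrive.

    # Minimal sketch: linear-trend workload forecast for one resource.
    # The utilization history and the 75% threshold are hypothetical.

    # Average monthly CPU utilization (%) for one server, oldest to newest.
    history = [42, 45, 47, 51, 54, 58]

    def linear_forecast(samples, months_ahead):
        """Fit a least-squares line to the samples and project it forward."""
        n = len(samples)
        xs = range(n)
        mean_x = sum(xs) / n
        mean_y = sum(samples) / n
        slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
                 / sum((x - mean_x) ** 2 for x in xs))
        intercept = mean_y - slope * mean_x
        return intercept + slope * (n - 1 + months_ahead)

    THRESHOLD = 75  # utilization (%) at which more capacity is typically planned
    for m in (3, 6, 12):
        projected = linear_forecast(history, m)
        flag = "  <-- plan upgrade" if projected >= THRESHOLD else ""
        print(f"Month +{m:2}: projected {projected:.1f}% utilization{flag}")

With the sample data shown, the projection crosses the threshold within about six months, which is exactly the sort of early warning frequent forecast updates are meant to provide.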

Web-Enabled Environment Issues

This final section presents some topics to consider when implementing systems management processes in a web-enabled environment. One of the benefits of well-designed systems management processes is their applicability to a wide variety of platforms. When properly designed and implemented, systems management processes can provide significant value to infrastructures in mainframe, midrange, client/server, and web-enabled shops.

But just as we saw with client/server environments, there are some special issues to consider when applying systems management processes to application environments that are web-enabled through the Internet. Most of these issues center on the inherent cultural differences that exist between mature mainframe-oriented infrastructures and the less structured environment of web-enabled applications. Almost all companies today use the Internet for web-enabled applications, but the degree of use, experience, and reliance varies greatly.

Based on these environmental attributes of use, experience, and reliance, along with other factors, we can divide companies using web-enabled applications into three categories. The first consists of traditional mainframe-oriented companies that are just beginning widespread use of web-enabled applications. The second involves moderate-sized but growing enterprises that started using web-enabled applications early in their development. The third consists of dotcom companies that rely mostly on the Internet and web-enabled applications to conduct their business. Table 22-1 shows the environmental attributes and cultural differences among these three categories of companies that use web-enabled applications.

Table 22-1. Environmental Attributes and Cultural Differences


Real Life Experience—Tap Dancing in Real Time

A CEO at a dotcom start-up was eager to show his staff how he could display their company website in real-time at his staff meetings. Unfortunately for his IT performance team, he picked the one day when a new operating system release went in improperly and slowed response down to a crawl. The technical support manager had to do some quick tap dancing when he was called into the meeting to explain what happened. Ironically, this was the same manager who had set up the displayable website in the first place.

Traditional Companies

Organizations comprising the first of the three categories are traditional Fortune 500 companies that have been in existence for well over 50 years, with many of them over 100 years old. Most of them still rely on mainframe computers for their IT processing of primary applications (such as financials, engineering, and manufacturing), although a sizable amount of midrange processing is also done. Many have already implemented some client/server applications but are just starting to look at web-enabled systems. The conservative and mature nature of these companies results in a planning horizon of two to three years for major IT decisions such as enterprise-wide business applications or large investments in systems management. Many of these firms develop and maintain five-year IT strategic plans.

IT personnel in this first category of companies average greater than 15 years of experience, with many exceeding 20 years. Since most of these companies have well-established mainframe environments in place, their staffs have valuable experience with designing and implementing systems management processes for their mainframe infrastructures. Their ratio of mainframe years of experience to web-enabled years of experience is a relatively high 10 to 1. Their long experience with mainframes and short experience with the Web can hinder their implementation of infrastructure processes in the web-enabled environment if they are unwilling to acknowledge cultural differences between the two environments.

Infrastructure personnel who work in mature mainframe shops understand how well-designed processes can bring extensive structure and discipline to their environments. If the process specialists for web-enabled applications are isolated from the mainframe specialists, as they frequently are in traditional companies, cultural clashes with process design and implementation are likely. The best approach is to have a single process group that applies to all platforms, including mainframes and web-enabled environments.

Major IT changes in traditional infrastructures are scheduled weeks in advance. A culture clash may occur when the more rigid mainframe change standards are applied to the more dynamic nature of the Web environment. Compromises may have to be made, not so much in the standards themselves, but to accommodate environmental differences.

Mainframe shops have had decades of experience learning the importance of effective disaster-recovery planning for their mission-critical applications. Much time and expense is spent on testing and refining these procedures. As applications in these companies start to become web-enabled, it is a natural progression to include them in the disaster-recovery plan as well. The conservative nature of many of these companies, coupled with the relative newness of the Internet, often results in their migrating only the less-critical applications onto the Web and, thus, into disaster-recovery plans. It is a bit ironic that companies with the most advanced disaster-recovery plans use them for web-enabled applications of low criticality, while firms with less developed disaster-recovery plans use them for web-enabled applications of high criticality.

The maturity of traditional companies affords them the opportunity to develop meaningful metrics. The more meaningful the metrics become to IT managers, to suppliers, and especially to customers, the more these groups come to rely on these measurements. Meaningful metrics help them isolate trouble spots, warn of pending problems, or highlight areas of excellence.
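As a simple illustration, the sketch below computes two metrics of the kind described here, monthly availability and a 95th-percentile response time, from hypothetical sample data. A real metrics program would draw these figures from monitoring tools rather than hard-coded lists.

    # Minimal sketch: two commonly reported infrastructure metrics.
    # Outage minutes and response-time samples are hypothetical.

    import math

    outage_minutes = [12, 0, 35, 8]   # unplanned downtime per week of the month
    minutes_in_month = 30 * 24 * 60

    availability = 1 - sum(outage_minutes) / minutes_in_month
    print(f"Availability: {availability:.3%}")

    response_times = [0.8, 1.1, 0.9, 2.4, 1.0, 3.9, 1.2, 0.7]  # seconds, sampled

    def percentile(values, pct):
        """Return the value at the given percentile using the nearest-rank method."""
        ordered = sorted(values)
        rank = max(1, math.ceil(pct / 100 * len(ordered)))
        return ordered[rank - 1]

    print(f"95th percentile response time: {percentile(response_times, 95):.1f}s")

Metrics as simple as these become meaningful when they are reported consistently over time, so that managers, suppliers, and customers can see trends rather than isolated numbers.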

Moderate and Growing Companies

Companies in this category are moderate in size (fewer than 5,000 employees or less than $5 billion in annual sales) but growing enterprises that have been in existence for less than 15 years. They run most of their critical processing on client/server platforms and some midrange computers but have also been using web-enabled applications for noncritical systems from early in their development. These firms are now moving some of their critical processing to the Web. The size and diversity of many of these up-and-coming firms are expanding so rapidly that their IT organizations have barely a year to plan major IT strategies.

IT personnel in this type of company typically have five to 10 years of experience and a ratio of mainframe-to-web experience approximating 1 to 1. The IT staffs in these companies have less seniority than those in traditional mainframe shops, unless the company is an acquisition or a subsidiary of a traditional parent company. Because their mainframe-to-web experience is lower, staffs in this category of company are usually more open to new ideas about implementing infrastructure processes but may lack the experience to do it effectively. Creating teams that combine senior- and junior-level process analysts can help mitigate this. Setting up a technical mentoring program between these two groups is an even better way to address this issue.

The structure and discipline of the infrastructures in these moderate but growing companies are less than what we find in traditional companies but greater than what we find in dotcom firms. This can actually work to the company's advantage if executives use infrastructure process initiatives to define and strengthen the structures they want in place. Major IT changes tend to be scheduled, at most, one week in advance; normally, they are scheduled only a few days in advance. This can present a challenge when implementing change management for important web-enabled changes. Again, executive support can help mitigate this culture clash.

Disaster-recovery plans in this type of company are not as refined as those in traditional IT shops. Because many of the applications run on client/server platforms, disaster recovery is subject to many of the issues described in the client/server section of this chapter. The good news is that some type of disaster-recovery planning is usually in its early stages of development and can be modified relatively easily to accommodate web-enabled applications. These systems are usually of medium criticality to the enterprise and should therefore receive the support of upper management in integrating them into the overall disaster-recovery process. Meaningful management metrics in these companies are not as well developed or as widely used as in traditional companies and should be an integral part of any systems management process design.

Dotcom Companies

Dotcom companies are interesting entities to study. During the late 1990s, the use of the term became widespread, signifying the relative youth and immaturity of many of these enterprises, particularly in the area of their infrastructures. In comparison to the other two categories of companies, dotcoms were new on the scene (with an average age of less than five years). Most of their mission-critical applications are centered on a primary website and are web-enabled. Some of their supporting applications are client/server-based.

The culture of most dotcoms is quick and urgent, with events taking place almost instantaneously. The planning horizon for major IT decisions rarely spans a year, and it can be as short as three months. One such company I consulted for actually decided on, planned, and implemented a major migration from SQL Server databases to Oracle databases for its mission-critical application, all within a three-month period. The challenge in these environments is to design systems management processes that are flexible enough to handle such a quick turnaround yet robust enough to be effective and enforceable. Thorough requirements planning helps greatly in this regard.

Dotcom personnel generally have an average of one to five years of IT experience. Typically, there are one or two senior-level IT professionals who help launch the original website and participate in the start-up of the company. The entrepreneurial nature of most dotcom founders results in their hiring of like-minded IT gurus who are long on technical expertise but short on structure and discipline. As a result, there is often a significant culture clash in dotcoms when they attempt to implement infrastructure processes into an environment that may have thrived for years with a lack of structure and discipline. Since there is negligible mainframe experience on staff, systems management processes are often viewed as threats to the dynamic and highly responsive nature of a dotcom—a nature that likely brought the company its initial success. At some point, dotcoms reach a size of critical mass where their survival depends on structured processes and discipline within their infrastructures. Knowing when this point is close to being reached, along with addressing it with robust systems management processes, is what separates sound dotcom infrastructures from those most at risk.

The average planning time for major IT changes in a dotcom is 12 to 48 hours. Implementing a change management process into this type of environment requires a significant shift in culture, extensive executive support, and a flexible, phased-in approach. With few infrastructure processes initially in place at a dotcom, the likelihood is small that disaster-recovery planning exists for any applications, let alone the web-enabled ones. Since most mission-critical applications at a dotcom are web-enabled, this is one of the processes that should be implemented first. Another infrastructure initiative that should be implemented early on, but often is not, is the use of meaningful metrics. The dynamic nature of dotcoms often pushes the development and use of meaningful metrics in these companies to the back burner. As previously mentioned, dotcoms reach a point of critical mass for their infrastructures. This is also the point at which the use of meaningful metrics becomes essential, both to the successful management of the infrastructure and to the proper running of web-enabled applications.

Summary

This chapter identified several special issues to consider when implementing systems management processes in a client/server or a web-enabled environment. The client/server issues involved vendor relationships, multiple platform support, performance and tuning challenges, disaster-recovery planning, and capacity planning. A discussion of each issue then followed, along with methods to address each one.

The web-enabled issues were presented in terms of three categories of companies in which they occur. We looked at environmental attributes and cultural differences for each of these categories and put them into perspective in terms of their impact on web-enabled applications.

Test Your Understanding

1. Performance factors such as transaction response times are sometimes degraded during the disaster recovery of client/server applications due to reduced bandwidth, channel saturation, or other performance bottlenecks. (True or False)

2. The use of applications in a client/server environment tends to expand less quickly and more predictably than those in a mainframe environment. (True or False)

3. Which of the following is not a performance-tuning consideration for a client/server environment:

a. impact of database structures

b. standardization of network interfaces

c. application system changes

d. differing types of desktop upgrades

4. Huge databases and data warehouses today can be housed economically in _______________ .

5. Summarize the cultural differences between a traditional mainframe data center and those of a web-enabled environment.

Suggested Further Readings

1. Real Web Project Management: Case Studies and Best Practices from the Trenches; 2002; Shelford, Thomas J., Remillard, Gregory A.; Addison-Wesley Professional

3. Advances in Universal Web Design and Evaluation: Research, Trends and Opportunities; 2006; Kurniawan, Sri, Zaphiris, Panayiotis; IGI Global
