Chapter 3. Project Management Considerations

Implementation Requirements

Although the material in this chapter is targeted mainly toward the project manager of a Terminal Server/MetaFrame implementation, I feel it’s worthwhile for anyone involved in the project to have a clear understanding of the implementation requirements—not only what they are, but also why they’re needed. Many technical people I’ve met feel this is really nothing more than “fluff” and has little impact on the “real” work they’re required to do in order to drive the project to completion. While the technology is certainly important (or you wouldn’t be reading this), without clearly outlining why it’s being used, who will use it, and how it will be implemented, it’s highly likely that the end product won’t function as you or the business expected.

Poor planning leaves the door open for one of the most common project ailments, known as project or scope creep. This is the slow and continuous expansion of features and support that were never really a part of the initial project considerations but have somehow found their way into the list of requirements that must be met in order to consider the project a success. Many technical people fall victim to this. Some administrators agree to include something that is out of scope simply to get it out of their mind so they can concentrate on something else. Others agree to do it simply to satisfy their ego, in an attempt to prove that any expectation can be met and any requirement delivered, even when meeting such a challenge is realistically impossible due to time constraints or complicating factors such as technical limitations.

As a result, most loosely defined and/or managed projects are beset with problems, some of the most common being the following:

  • Significantly over budget—Poor planning and unforeseen difficulties require additional human and financial resources. While proper planning will not always protect against this, areas that are considered the most critical can be investigated and alternative solutions taken into consideration as part of the overall scope of the project.

NOTE:

An odd offshoot of running a project over budget is being forced to manage a project that is confined to completely unrealistic budget constraints. Typically these have been imposed by upper management, who adamantly insist that this is the cap that cannot be exceeded for the project. The unfortunate part is that this budget has usually been approved by the administrator’s immediate supervisor based on sizing estimates that are completely unrealistic for the project at hand. The unlucky administrator in this situation is forced to make one of two choices.

  • Try to acquire the hardware and software needed to successfully implement the project, while at the same time fitting it into a completely unrealistic budget.

  • Argue the point that such constraints will surely result in an implementation that will not meet the needs of the business and as a result will be viewed as a failure by the end user community.

Obviously being able to argue the second point requires a clear understanding of what is required, as well as a realistic budget that will bring those requirements to life. Simply saying that there isn’t enough money is not going to be very convincing.

The disappointing truth is that most administrators will opt for the first point and through sheer determination hope to bring the project to life. The end result will likely be what you expect: an environment that is underpowered with second-rate equipment that will need to be upgraded or replaced within six months’ time.

  • Late delivery—A lack of thorough project planning and realistic time management will always impact the delivery dates.

  • Reduction of scope—Either portions of the project that were originally in scope are omitted, or functionality that was originally planned has to be deferred or dropped completely. Many times a creep in the initial scope of the project can see the delivery of features or components not necessarily required, while other areas initially included end up being omitted.

  • Negative impact to the business—This includes situations where the implementation introduces some major obstacles that require repeated rollbacks and redeployments to correct, affecting not only delivery dates and the project budget but also the user’s confidence in the product. A poor impression among the end user community can be damaging not just to your reputation but also to others’ confidence that you can deliver on future requirements or changes. An issue on a Terminal Server can potentially be seen by and impact tens or hundreds of users, so taking the time to ensure a solid implementation is critical.

  • Failure to manage end users’ and management’s expectations—Quite often people will read the hype surrounding Terminal Server and MetaFrame and feel they can deliver everything for nothing. This is not the case, and failure to manage users’ expectations from the start will surely result in negative repercussions not only at the end but throughout the project.

  • Project cancellation—The most drastic (and far more common than you might think) result of any of the problems described in this list, and usually a combination of all of them, is the cancellation of the entire project. Not only is such a result incredibly demoralizing to you and your administrative team, but it also negatively impacts the user community, who quickly lose faith in their IT department to deliver the solutions required by the business.

While it has been argued that these difficulties happen more often than not and are simply a part of any project, I have yet to see a Terminal Server implementation encounter any of them when proper implementation planning and project management have occurred.

NOTE:

Very often when I hear people complain about issues they have had with a Terminal Server implementation, the problems could easily have been avoided if the proper implementation planning had been performed.

One factor in minimizing potential problems arising during an implementation is to understand what I consider the five key implementation requirements:

  • Documentation

  • Leveraging desktop deployment flexibility

  • Defining the scope of Terminal Server in your business

  • Enlisting executive sponsors

  • Not promising what you can’t deliver

Documentation

The root issues with documentation are twofold:

  • Traditionally, documentation has been very inconvenient to manage. There was no easy way to store it so it could be readily located, searched, or modified. Document management systems exist, but many organizations don’t have the money to implement one or the resources to manage such an environment.

    The growth of the Internet as a business tool has greatly accelerated the development of products for authoring and managing Web-based information. The ease of use and availability of these tools, coupled with corporate intranets and extranets, has had an extremely positive impact on document management. Documentation can now be stored on an intranet Web site, easily accessible to anyone who’s interested. Considering that Windows ships with both Web server and Web browser software, and that word processing tools such as Microsoft Word provide sophisticated capabilities for document creation and collaboration, including HTML creation, there are very few excuses for not having current, accessible documentation for your Terminal Server project. Even a page with links to the relevant Word documents can be a valuable tool in keeping the team members up-to-date with the necessary information.

    Unfortunately, many managers and most technical staff believe that the time required to configure and manage a document library would be better spent working on the implementation itself. Even though a logical and simple folder structure could be created to centrally store the necessary documentation, convincing people that it is a worthwhile endeavor is another matter.

  • And of course, accessibility alone doesn’t guarantee that documentation will actually be created. Time and resources must be available to ensure that the necessary documentation is written. Even if you’re not responsible for managing the implementation, don’t fool yourself into believing that you’re saving time by not writing documentation. In fact, you’re doing the opposite. Documentation is much like insurance, whose inherent worth becomes apparent only when it’s needed. Documentation is the foundation for training, upgrades, and even disaster recovery of your Terminal Server environment. When your applications or systems are running smoothly, you don’t think about documentation, but if disaster strikes, documentation can provide valuable information that might otherwise be forgotten.
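The intranet approach described above doesn’t require a full document management system. As an illustration only (the folder layout and file types here are my own assumptions, not part of any product), a short script can generate the kind of page of links to the relevant Word documents mentioned earlier:

```python
import html
from pathlib import Path

# File types we treat as project documentation; adjust to suit.
DOC_TYPES = {".doc", ".docx", ".pdf", ".txt", ".html"}

def build_doc_index(doc_root: Path, out_file: Path,
                    title: str = "Terminal Server Project Documentation") -> None:
    """Scan doc_root for documents and write a simple HTML index page of links."""
    links = []
    for doc in sorted(doc_root.rglob("*")):
        if doc.is_file() and doc.suffix.lower() in DOC_TYPES:
            rel = doc.relative_to(doc_root).as_posix()
            links.append(f'<li><a href="{html.escape(rel)}">'
                         f'{html.escape(doc.stem)}</a></li>')
    page = (f"<html><head><title>{html.escape(title)}</title></head><body>"
            f"<h1>{html.escape(title)}</h1><ul>{''.join(links)}</ul>"
            f"</body></html>")
    out_file.write_text(page, encoding="utf-8")
```

Scheduling something like this to run nightly against the project’s shared documentation folder keeps the index current with no manual effort, which removes one common excuse for letting the library go stale.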

NOTE:

I can’t even count how many times I’ve heard the statement that documentation will be written after the project is completed. Very rarely has this ever happened. By the end of the project, resources such as time and/or money have run out (because of a lack of documentation and planning, perhaps?) and another project is usually waiting to take its place.

Writing documentation makes you think about what you’re doing and why you’re doing it. It will help you uncover problems or deficiencies in your plan before they have the opportunity to affect your project.

As the project progresses through the implementation, documentation provides a clear footprint of where you’ve been and where you’re going. This lets people new to the project be brought up to speed quickly, without misinterpreting the project’s purpose, scope, and direction. Documentation also provides an audit trail, allowing you to review why certain decisions were made and greatly reducing the possibility of second-guessing a direction that was taken much earlier in the project’s life.

While poor or nonexistent documentation may not doom your project to failure, having documentation that accurately reflects the current and future state of your environment will go a long way toward ensuring its successful completion. Keeping a pad of paper readily available while you’re working on your servers will allow you to quickly jot notes that you can refer back to later. It will certainly help you remember tasks you’ve performed that you may later forget, particularly if you are extremely busy and become distracted troubleshooting some other problem.

Leveraging Desktop Deployment Flexibility

A Terminal Server implementation is a unique combination of client and server interaction that’s unlike a traditional Windows server deployment. Although changes will be made to the client desktop, removing some or possibly all applications from the user’s local control, the driving factor of the implementation is not the desktop. The desktop is simply the tool that you will use to deliver Terminal Server access to the user.

Terminal Server provides you with additional flexibility in your deployment that you wouldn’t normally have in a traditional desktop upgrade project. Leveraging this flexibility will greatly reduce the impact on end users during the rollout. The goal is to introduce Terminal Server to them with minimal disruption to their work productivity.

Some areas where this flexibility can prove quite valuable are

  • Piloting—Imagine being able to provide the user with two computers during a pilot. One contains the desktop that the user is piloting, and the other contains the user’s original desktop. As soon as the user encounters a problem with the pilot computer, he or she simply returns to the original desktop until the problem is resolved. The Terminal Server client provides this same functionality. During piloting, the user can run in his or her new Terminal Server environment, returning to the local desktop if problems arise.

  • Testing validation—One problem that continually arises when you are performing a desktop upgrade is that differences in the client hardware and software can pose compatibility issues for the applications you’re deploying. What works fine on one desktop refuses to run properly on another desktop. The centralized nature of Terminal Server overcomes this problem and provides true validation for your piloting. Testing with a small group of users has greater value in a Terminal Server deployment because once the application is working properly for them, it will work for all users. Testing that Terminal Server is working properly doesn’t require that you deploy it to a large number of users. By keeping the initial test groups small, you eliminate many problems early on while being able to quickly respond to users’ issues as they arise. If Terminal Server isn’t working for 5 users, it won’t work any better for 50. Large-scale pilots add no value until you have validated the small test group. Don’t fall into the trap of trying to do too much too quickly.

  • Training—Terminal Server provides you with the ability to train a user on his or her Terminal Server session from any location, on any client device. Traditional training would involve sending users to a room to train on a generic computer with a configuration that usually didn’t accurately reflect the user’s personal computer setup. Terminal Server lets users see a familiar session, regardless of where they’re physically located. Training is more consistent with how they’ll actually be working. I’ve been involved in a couple of implementations in which a training room was established with Terminal Server accessible both to demonstrate the new environment and also to train administrative and support staff prior to initiating user testing. The ability to provide administrators and support staff with a look at the environment prior to having to assist users can be valuable in optimizing the time required to resolve an issue with a user.

  • Application migration—It’s very likely that not all users will have all their local applications moved to Terminal Server. This is an advantage of Terminal Server that isn’t always apparent and in many cases is actually played down. Many references to Terminal Server imply that an implementation requires moving all local applications to the server. I feel that this thinking is tied too much to the traditional desktop upgrade, where all applications are moved to the new computer or reinstalled on the new operating system. Terminal Server provides the unique advantage of moving specific applications into the server-based computing environment while leaving other applications, such as those requiring special hardware, on the local computer. Too often I see issues arise in an implementation where a single application that’s required by only a few users is holding up an entire deployment. See the next section for more details on this.

Defining the Scope of Terminal Server in Your Business

Human and financial resources and the types of applications in your current user environment will determine the scope of Terminal Server in your business. One of the decisions that will be made early on in the planning stage of your deployment is determining what the target user community will be. As mentioned in the preceding section, a common problem with many implementations is the misconception that all the user’s applications must be moved off the desktop and onto the Terminal Server if any benefits are to be seen in the total cost of ownership (TCO). This simply isn’t true. In fact, even moving only a subset of a user’s applications can reduce TCO. The key is choosing the right software to move:

  • Moving applications that have few support costs or are updated infrequently will show only a small reduction in TCO. Usually the benefits in this situation are not seen until the next major upgrade is required.

  • Some prime targets to move are those applications that currently have large support or deployment costs. In most circumstances, the resulting move and standardization of the running environment to Terminal Server will help reduce many of the application issues and costs.

Terminal Server is not an all-or-nothing solution. The application migrations don’t all have to happen simultaneously; they can instead be staggered to happen when most appropriate for both the users and the administrator. Applications that don’t fit within the scope of the project should be left to run on the user’s desktop until a later date. If you want to have a successful deployment, clearly define the scope of the project from the beginning and deviate from it as little as possible. The two areas where I have most often seen a change in the scope of Terminal Server implementations are as follows:

  • Client scope—Partway into the project, a decision is made to expand the client scope to include another user group that appears to be similar to the currently targeted users. Very often this includes adding applications required by these new users. In almost every situation where this has happened, hardware capacity or application issues have caused unexpected problems and delays in the implementation. Whenever possible, avoid modifying the client scope once you have moved beyond planning into piloting or the actual deployment. I discuss determining client scope in the later section “Developing the ’To-Be’ Model.”

  • Application scope—Although you’re likely to add and remove applications during the planning, testing, and even the pilot stage of your project, the deployment stage is not the time to be making application additions or modifications. Almost every project will encounter a situation in which an application will be discovered during deployment that would be a good candidate for inclusion in the project. Don’t add these applications to your environment unless absolutely necessary. I would suggest instead that the information be inventoried so it can be reviewed and prioritized at a later time. Any unplanned modifications will almost always have a negative impact on the production environment, particularly if they haven’t been part of any previous testing or piloting.
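One lightweight way to inventory those out-of-scope application requests, rather than acting on them mid-deployment, is to log them somewhere they can be reviewed and prioritized later. The sketch below is purely illustrative (the field names and the “deferred” status are my own assumptions); a spreadsheet or a help desk ticket category would serve just as well:

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical fields for tracking out-of-scope application requests.
FIELDS = ["date", "application", "requested_by", "reason", "status"]

def log_scope_request(log_file: Path, application: str,
                      requested_by: str, reason: str) -> None:
    """Append an out-of-scope application request to a CSV for later review."""
    is_new = not log_file.exists()
    with log_file.open("a", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()  # header row only on first write
        writer.writerow({
            "date": date.today().isoformat(),
            "application": application,
            "requested_by": requested_by,
            "reason": reason,
            "status": "deferred",  # reviewed and prioritized after deployment
        })
```

The point isn’t the tooling; it’s that the request is captured and visibly deferred, so the requester knows it hasn’t been ignored and you aren’t pressured into an unplanned production change.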

Enlisting Executive Sponsors

Every project must have three things in order to succeed:

  • Leadership

  • Money

  • Human resources

To ensure that these elements are available for the duration of your project, you must enlist the support of one or more executive sponsors. An executive sponsor is a senior person within your business who can ensure that the changes you want to introduce are accepted and endorsed by the company from the top down. Top-down knowledge of the project ensures alignment with the company’s strategic direction and is your only weapon in dealing with users and management who introduce resistance to your project for political or personal reasons. If you attempt to work in isolation without this top-down support, you will repeatedly have to justify your intentions and run the risk of losing access to one or more of the listed elements.

NOTE:

I once was called in to consult for a company that was having issues with a Terminal Server implementation. They had deployed the product to approximately 200 users, and application problems were affecting the users’ ability to work. After speaking with a couple of users, I found out that neither they nor their manager had received any prior notification that this change was even being made. Inadequate piloting and training had resulted in an unacceptable release situation.

Within a week the project was terminated, and all users were rolled back to their original desktops. The decision to deploy Terminal Server had come from the server support department without the official support of the end users’ management. Even though the decision to deploy Terminal Server made sense both technically and from a business perspective, because the proper people had not endorsed it, the project was not allowed to proceed.

Don’t Promise What You Can’t Deliver

While it sounds simple, not delivering on promises is often the issue that causes the most problems. I don’t know how many times I have heard someone promise things such as increased performance or greater stability without having any clear idea whether they could deliver this. If an application is slow or buggy on the local desktop, there’s no guarantee that moving it to Terminal Server will eliminate either of these problems. It depends on whether the issues are due to the client desktop or the application itself. An application that leaks memory when run on the desktop will continue to leak memory when run on Terminal Server.

Make sure that you set realistic expectations for users. If you end up delivering more, all the better—but don’t promise things you cannot deliver.

Business Process Management

An important part of managing a Terminal Server project is being able to manage the migration of the business processes effectively from the existing “as-is” model to the future “to-be” model. This migration is often called business process reengineering (BPR). When planning the BPR, there are two things you’re trying to achieve:

  • Minimizing the change in how users must perform their work—You want to integrate Terminal Server into their environment with as little disruption as possible.

  • Optimizing the process of managing the end user—By implementing Terminal Server, you’re looking to reduce the support requirements of the user and optimize the support that must still be performed. This includes support at the server as well as the client.

You need to consider the changes that you’re introducing to the way in which these jobs have traditionally been done. For example, the existing activities required for maintaining a single desktop will be reduced or eliminated. Application support, hardware support, software upgrades, and training will all be affected. To maximize the benefits of your Terminal Server implementation, you must communicate these changes as effectively as possible. A clearly documented change in business processes will also minimize the uncertainty and misunderstanding around what exactly Terminal Server is bringing to your organization.

TIP:

One common area with a large amount of uncertainty is the end-user support department. One of the key factors for introducing Terminal Server is the reduction in desktop support costs and hence a reduction in TCO. Most desktop support staff members equate the introduction of Terminal Server with the elimination of their jobs. While this is rarely the case, it is true that Terminal Server will affect how end-user support staff perform their jobs. By involving these support people early and making it clear what their support role will be once the implementation is complete, you’ll have a much easier time enlisting their cooperation to support you during the project.

To develop an effective BPR plan, you need to have an understanding of how the processes exist today and how they will work after the implementation. These are known as the “as-is” and “to-be” business process models, and together they form the roadmap for your Terminal Server implementation.

Developing the “As-Is” Model

The starting point for developing your BPR plan is determining what your business processes are today. This is commonly known as your “as-is” model. In an ideal world, a company would have an “as-is” model available with all the necessary information. In most cases, some form of a model will need to be developed. In forming this model, you need to concentrate on four areas:

  • Users and user support—Look at how the users work today: which applications they commonly use and which they don’t. Before you begin, you will likely already have an idea as to which groups you will be targeting for the Terminal Server deployment. When preparing the “as-is” model, be sure to look for such things as one-off applications or other exceptions that will need to be flagged and accounted for when planning the implementation. Note what additional hardware users might use or access, particularly such things as file or print servers, scanners, modems, or even proprietary “dongles,” cabling, or other hardware-based licensing mechanisms required by certain legacy applications. Document concerns, issues, and suggestions that users may have about the existing environment. This will help in determining hardware requirements, pilot user groups, initial implementation groups, special-needs users, and exceptions that may need to be excluded from your Terminal Server deployment. Often, you’ll be able to establish a group of users during this time that can be used to pilot and test the initial Terminal Server environment.

    Another valuable user consideration is the establishment of measures for such factors as logon times, application speed, time to access network resources, and time required for switching between applications. You shouldn’t spend a lot of time attempting to gather large amounts of detailed quantitative data, but some average times can be useful in flagging deficiencies in the infrastructure that may need to be addressed prior to a Terminal Server go-live. You may also be able to determine areas where Terminal Server can save time or boost performance of existing hardware. Often this is achieved by having the Terminal Server and one or more file servers located on the same physical network (see the next point). This proximity can dramatically enhance the performance of resource access and is usually noticed right away, particularly by the “power” users. Remember: Don’t promise performance gains with Terminal Server until you’re sure you can deliver them.

  • Network and network support—The inclusion of the most accurate network infrastructure information available is a critical component of your “as-is” model. A network diagram should be available that includes all relevant client and server networks that will interact with your Terminal Servers. The types of networks and the supported protocols should also be included. A key piece of information is why servers have been placed in certain locations and if there are any issues with moving them. This is important when looking at where the Terminal Servers will be situated on the network. You’ll need to co-locate them with whatever other servers users will need to access through their Terminal Server sessions. Knowing which servers can be moved and which can’t will help in developing an accurate “to-be” model. A network support contact will be required not only to assist with any network issues that may arise but also as a resource for accurate information on network capacity and future direction.

  • Servers and server support—You need to inventory which server hardware and associated operating systems are currently in production. Of particular interest will be those systems utilized by the end user, such as file and print servers. As complete a Windows domain diagram as possible, with the appropriate trust relationships, should be included if anything other than a simple forest/domain configuration is in effect. WINS, DHCP, DNS, and other network servers should also be noted, depending on the network protocols being used. As I mentioned, you will want to try and co-locate all the necessary servers accessed by the users with the Terminal Servers, if possible. If all your users will be accessing a particular file server, then it may be necessary to ensure that file server is placed on the same network as the Terminal Servers. A local domain controller will also be necessary to ensure timely authentication and logon script processing.

    Determine what domain the Terminal Servers will reside in and which users are responsible for administering these domains. Terminal Server will require the creation of customized administrative groups in the domain to support the environment, so including the administrators in the planning process is critical. While in many environments the Terminal Server administrator is also a domain administrator, this is not always the case. Don’t expect anyone’s full cooperation if you spring your Terminal Server requirements on him or her at the last minute. A clear understanding of how Terminal Server will fit into the infrastructure is another key requirement for a smooth implementation.

  • Software—Determine the current software standards within the organization, as well as any exceptions that should exist. Knowing the number of licenses that exist for software is important in determining how the software will be made available on a Terminal Server and whether access restrictions may be required. It’s not uncommon to publish applications and manage their licensing by restricting access to members of a specific Windows security group or implementing restrictions on the total number of concurrent instances of a running application. When combining applications from different departments on a Terminal Server it is important to verify that all the applications can operate with the same version of software, such as the Microsoft Data Access Components (MDAC) or the Java Runtime Environment (JRE).
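The baseline measures suggested above (logon times, application speed, time to access network resources) don’t call for anything elaborate. A sketch like the following is enough to capture rough averages; the actions being timed are left as placeholders, since they depend entirely on your environment:

```python
import statistics
import time
from typing import Callable

def baseline(label: str, action: Callable[[], None], runs: int = 5) -> dict:
    """Time an action several times and report average/min/max in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        action()
        samples.append((time.perf_counter() - start) * 1000.0)
    return {
        "label": label,
        "avg_ms": round(statistics.mean(samples), 1),
        "min_ms": round(min(samples), 1),
        "max_ms": round(max(samples), 1),
    }

# Example: time how long it takes to open a file on a network share.
# The UNC path is purely illustrative -- substitute a real share.
# result = baseline("open shared doc",
#                   lambda: open(r"\\fileserver\docs\manual.doc", "rb").close())
```

A handful of numbers like these, taken before and after the Terminal Server go-live, is all you need to back up (or temper) any claims about performance, without drowning in detailed quantitative data.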

Developing the “To-Be” Model

I’ve always looked on the development of the “to-be” model as the creation of the answer to the question, “What are you trying to achieve?” The simple answer is that you’re deploying Terminal Server and moving software off the local desktop to run on the server(s), but the complete answer is much more than that. Your “to-be” model will provide answers to the following questions:

  • Who will be using Terminal Server?

  • How were these people chosen?

  • What software are you going to deploy?

  • Which users will use what software?

  • What deployment method will you use (desktop replacement, application replacement, and so forth)?

  • Where will the servers be located, and who will be responsible for supporting them?

The most common way to develop the “to-be” model is to work from the “as-is” model to determine the end state in each documented situation. For example, in your “as-is” model you will have noted existing user configurations along with the processes in place to support them. In your “to-be” model, you document how these setups would change to reflect the Terminal Server environment. The information doesn’t necessarily need to be extensive but does need to clearly point out the changes that will be occurring. The changes in how the support staff handles client hardware issues might go something like this:

If a user is having any hardware-related issue, then the desktop support person shouldn’t attempt to repair the computer at the user’s desk. The user should be directed to sit at an alternate desk if available and connect to Terminal Server through that machine until the first one has been repaired. If no alternate desk is available, a Windows terminal or backup PC should be provided temporarily, to let the user have Terminal Server access until a replacement machine can be delivered. Under no circumstances should the client hardware be disassembled or repaired while the user waits. Getting the user up and working again as quickly as possible should be the first priority.

Other examples might include the following:

  • Updating the network diagram to show the position of the Terminal Servers and any other new or moved hardware within the infrastructure—A visual representation of your implementation is an excellent way to describe what is happening.

  • Developing Terminal Server–specific support procedures for your help desk so they can efficiently handle calls from your new users—This would include training on how to use the remote control features of Terminal Server and how to accurately redirect calls to an application or Terminal Server administrator if necessary. In most situations it is not necessary to immediately dispatch a support person to the user’s desktop.

To ensure that your “to-be” model is what’s required by the business, take the time to describe the implementation clearly and note where you anticipate possible issues. Work closely with the appropriate contacts within the business to ensure that your goals are in line with both their requirements and yours. Afterward, you’ll have a plan for implementing Terminal Server that’s both clearly understood and accepted by the business.

Policies and Procedures

An important part of developing your “to-be” model is creating or modifying policies and procedures for both your clients and your servers. These will be used not only for managing and constructing the environment during implementation but also for continued management once in production. Create policies and procedures that will add value and lead to a more manageable environment—don’t create them simply because you can. Before putting any policy or procedure in place, ask yourself these questions:

  • Does the policy or procedure you want to introduce resolve or control an issue that exists today or is anticipated to exist in the near future? An example might be the establishment of disk quotas on personal file areas to resolve issues with excessive disk consumption by users storing non-work-related data.

  • Is this policy or procedure easy to communicate? Will it be understood easily by others?

  • Is it simple enough that the people who are supposed to abide by it (users, developers, and administrators) will do so, or will they seek ways to circumvent it?

  • What are the possible ramifications if these policies or procedures are not implemented?
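As a sketch of the kind of check the disk-quota example above implies, the following Python function (hypothetical, not part of Terminal Server itself) totals each user’s home-directory usage and reports the users over a given threshold. On a real server you would enable NTFS disk quotas rather than script this by hand, but a report like this is useful for gauging the impact before a hard quota is turned on.

```python
import os

def over_quota_users(home_root, quota_bytes):
    """Return {username: bytes_used} for users whose home directory
    under home_root exceeds quota_bytes. Each top-level directory in
    home_root is assumed to be one user's personal file area."""
    offenders = {}
    for user in sorted(os.listdir(home_root)):
        user_dir = os.path.join(home_root, user)
        if not os.path.isdir(user_dir):
            continue
        used = 0
        for dirpath, _dirnames, filenames in os.walk(user_dir):
            for name in filenames:
                try:
                    used += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    pass  # file vanished or is inaccessible; skip it
        if used > quota_bytes:
            offenders[user] = used
    return offenders
```

A call such as `over_quota_users(r"D:\Users", 500 * 2**20)` would list everyone using more than 500 MB, giving you hard numbers to attach to the policy discussion.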

Selecting Policies to Implement

Your company most likely already has a number of policies and procedures in place. They are most often implemented to protect the business from legal recourse, lost revenue, or tarnishing its public image. Some common examples include the following:

  • A ban on the use of noncorporate or pirated software from home or the Internet. Many viruses and worms are introduced into a corporate network by users running pirated software they have acquired from the Internet. Being caught with pirated software during an audit can be damaging to both the finances and the reputation of an organization.

  • A ban on the viewing or possession of pornographic and other material (such as MP3s or pirated movies). The downloading of multimedia files of any kind can also place a large burden on a company’s Internet or storage resources.

  • Rules concerning storage of personal data such as personal tax information or children’s projects on corporate computers.

  • Rules against storing company information on a local PC instead of in an environment (such as a network) where the information is more secure, both from theft and loss through accidental or purposeful destruction.

  • Taking suitable measures to protect against the theft or destruction of company property, including corporate “secrets.”

Although each of these issues is important, all are behavioral policies and none are specific to the Terminal Server environment. The policies can exist without requiring Terminal Server, and Terminal Server can exist without these policies.

Terminal Server can be used to make the enforcement of these policies easier. For example, using Terminal Server as a complete desktop replacement and providing the user with only a diskless Windows-based terminal would greatly simplify enforcement of a policy concerning running pirated software or local storage of company information for these users.

When developing Terminal Server policies and procedures, concentrate on those that directly relate to its creation, administration, or support.

Many policies and procedures for Terminal Server are identical to those designed for the regular desktop environment. The key difference is in the amount of effort expended in enforcement. Enforcement of policies and procedures is inherently difficult when it must be taken out to each user’s desktop. The most obvious reason is that it’s nearly impossible to monitor or control what users are doing without a high level of maintenance.

These are some of the common areas where you may want to develop policies or procedures for Terminal Server:

  • Terminal Server system installation, configuration, and disaster recovery—By developing a standard procedure for the creation of your Terminal Servers, you ensure that additional machines can be built at any time to augment the current environment or recover in a disaster situation.

  • Commercial software selection, installation, upgrades, and back-out plans—A policy and procedure on how software that runs on Terminal Server is managed is very important. It lets you determine quickly whether a piece of software is suitable for running in the environment, along with guidelines for testing and implementing it into production.

  • Software developer guidelines—This includes such things as documenting the software installation requirements for the applications in your environment. This is probably the most difficult policy to put into place in a large corporation. Because in-house developers move around frequently and new projects are born quickly, such a policy can be difficult to communicate and enforce in a timely fashion. Very often you’ll encounter an application that needs to be deployed on Terminal Server but wasn’t tested in such an environment through the proper development process.

    A well-documented policy and procedure needs to exist to ensure that the Terminal Server administrator does not allow such an application into the production environment until it has passed a set of minimum requirements, ensuring that its introduction does not impact the other applications or users on the system.

NOTE:

Remember that most developers and their managers will know very little about Terminal Server and will most often completely ignore it until their application is complete. At this point, it can be difficult for them to make code changes if necessary to get their application to run in Terminal Server. This can mean that either the program will end up on the user’s desktop, or you’ll need to perform some workaround in Terminal Server to get the program to work. Easy accessibility to the necessary Terminal Server policies is key to ensuring that they’re followed.

  • In-house and custom-developed software management—This includes unit testing, acceptance testing, piloting, production promotion, and “emergency” fixes due to software bugs. The proper process for moving these applications into production goes hand in hand with the need to ensure that applications are developed to run in Terminal Server.

Specific policies and procedures for clients, servers, networks, and applications are discussed in the next few chapters, where I talk in more detail about implementation planning for each component.

The centralized manageability and homogeneous environment introduced by Terminal Server make it well suited to defining policies and procedures that can be enforced effectively. Terminal Server brings the user’s desktop logically closer to the administrators of the environment, while still providing the required functionality to the user.
