User Access to Applications

Making applications available to users is a major task for system administrators. Most users depend on reliable access to application software to get their jobs done. The demands of creating and maintaining user access to application software can easily consume a quarter or more of system administration time.

You may need to perform any or all of the following tasks to administer user access to applications.

  • Acquire software.

  • Locate space for the software.

  • Install the software on multiple local systems or on an NFS server.

  • Set up user environments, such as paths, links, and environment variables that are specific to each application.

  • Revise user environments each time the software version changes or new software is added.

Anything that you can do to streamline or automate these tasks will increase productivity.

The first step to manage software access is to use the automounter to match users with the proper binary version of applications. After setting up the automounter, you need to perform an additional step to provide a complete solution to this problem.

The following paragraphs describe four alternative approaches to providing a complete solution. Each solution has its own particular caveat.

The first approach is to use scripts that are run once on each system to set up the user environment for an application. Subsequently, when the user starts the application, the environment is already properly prepared. A disadvantage to this approach is that it introduces additional command names that users must learn to prepare for running an application. An additional drawback is that some programs use the same environment variable names as other programs with different values. When users run a script for a specific application but do not start the application until later, other packages that use the same environment variable may be affected. (See “Wrappers and Dot Files” for an example.)

The second approach is to have user .login or .profile files “source” a global configuration file that sets up the user's environment.

The third approach is to use wrappers to manage access to software. Wrappers are tailored application startup scripts. These scripts set up the user's environment at runtime and start the application. Wrappers perform the setup that you otherwise would need to hardcode in individual users' dot files.

Using wrappers together with standard application server layouts and simplified user mount and path setups can produce an environment in which you need to do very little, if any, administrative maintenance of the end user environment. Users can have as few as one software access mount and one software access path component.

Most application access at Sun is based on this last approach, which was developed by Sun Information Resources. Sun recommends that you consider dedicating servers to provide access to application software over the network, in the manner proposed in the following sections.

NOTE

A comprehensive description of how to configure and manage application servers is beyond the scope of this book. However, the approaches and examples cited here provide you with a foundation based on sound principles and real-world experience.


The fourth approach is to create site initialization files that contain initialization information or wrappers for applications. See “Wrappers and Site Initialization Files” for more information.


Automating Your Application Environment

The information in the following sections provides suggestions for ways that you can automate your application environment. The key technologies and techniques are introduced in Table 67 and described in the following sections.

Table 67. Key Elements for an Application Server

  • NFS/automounter -- Share application file systems across the network; guarantee consistency and integrity with read-only access.

  • Veritas Volume Manager, Veritas File System, or Online: DiskSuite™ -- Permit file systems larger than individual disks; enable a single mount to access a huge distribution.

  • Wrappers -- Remove setup requirements from the end-user environment; provide all users with consistent behavior.

  • Site initialization files -- Centrally manage wrappers and application setup scripts; provide all users with consistent behavior.

  • Symbolic links or hard links (symbolic links can cross file system boundaries) -- Enable one executable to have many startup names; permit generic path references to version-named locations; control default application versions.

  • Common command directory -- Make all commands accessible with a single path component.

  • rdist command -- Facilitate replication of file systems across application servers.

  • NIS/NIS+ name service -- Facilitate sharing files in a network environment.

When you set up an application server, you dedicate a single volume to contain the applications and wrappers. You create two (or more) directories in the volume. The application directory contains the applications and wrappers, as well as a symbolic link directory that you can use to determine the default version of the application. The common command directory contains symbolic links in the form of command names that link to the wrappers for each application. You can use a product such as Veritas Volume Manager or Sun Online: DiskSuite to create a large file system that spans more than one slice or disk.
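The directory scheme just described can be sketched with ordinary shell commands. In this hedged example, the staging root, the package name (frame-5.5), the wrapper file name, and the command name (maker) are all illustrative assumptions; on a real server the root would be the dedicated volume, such as /usr/apps.

```shell
# Sketch: create the application and common command directories
# under a staging root. Names below are illustrative assumptions.
APPROOT=${APPROOT:-$(mktemp -d)}

mkdir -p "$APPROOT/pkgs/frame-5.5" "$APPROOT/exe"

# Generic link that controls the default version of the package
ln -s frame-5.5 "$APPROOT/pkgs/frame"

# The wrapper lives with the package; the command name in the
# common command directory is a symbolic link to it
touch "$APPROOT/pkgs/frame-5.5/frame.wrapper"
ln -s ../pkgs/frame/frame.wrapper "$APPROOT/exe/maker"

ls -lR "$APPROOT"
```

Because the command name resolves through the generic frame link, repointing that one symbolic link cuts all users over to a new default version.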

Veritas Volume Manager provides the following capabilities, although you still write generic Solaris UFS file systems on top of these volumes.


  • Enables you to mirror your OS boot disk in case it fails.

  • Enables you to slice up or combine physical disks to create “virtual” disks called volumes. You can treat these volumes as though they are physical disks and can write file systems on them.

  • Enables you to mirror data on a volume so that failure of one physical disk that is part of the volume does not interrupt service.

  • Enables you to dynamically grow volumes on the fly without any downtime.

Veritas File System provides the following capabilities, which complement those of Volume Manager.

  • Enables you to create a VXFS file system, in addition to UFS, which can be written on a Veritas Volume.

  • Enables you to grow and shrink VXFS file systems on the fly, with no downtime.

  • Provides two- to fourfold performance improvement over UFS.

  • Provides a much faster fsck after system crash. On a VXFS file system, fsck takes only a few seconds, regardless of the size of the file system. On a large UFS file system, fsck could take hours.

  • Provides plug-ins to the VXFS file system that increase performance by bypassing all kernel buffering for applications like NFS file service and databases for which kernel buffering makes no sense.

  • Enables you to take a snapshot of a production file system, mount it at another location, and back it up while the production file system remains active.

Online: DiskSuite provides similar capabilities.

When you have installed the application packages, you write a wrapper that sets up the environment for the application. If you want to copy the setup to another server, you can do so with the rdist command. Refer to “Designing an Application Server” for a detailed description of these tasks.
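Such a copy might be driven by an rdist Distfile along the following lines. This is only a sketch: the host names are assumptions, and option syntax varies between rdist versions, so check your system's rdist documentation before use.

```
# Distfile sketch -- host names and the notify address are examples
HOSTS = ( appserv2 appserv3 )
APPS = ( /usr/apps )

${APPS} -> ${HOSTS}
	install ;
	notify admin ;
```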

Benefits of a Standardized Application Server Setup

The following sections describe the administrative benefits that you gain from a standardized application server setup.

Use NFS

Installing the same application for multiple users on local disks uses extra disk space, requires more time, and becomes a support burden. You must perform upgrades at multiple locations. When problems arise, you may have to deal with multiple versions of the same application.

When you provide an NFS-shared installation, you reduce local disk installations. You save time by reducing the number of systems that you must support. When multiple users share access to a single read-only copy of an application, you perform fewer installations and upgrades and simplify troubleshooting by ensuring that users are executing the same code.

Consolidate Your Installations

Even NFS-shared applications can be difficult to maintain if they are scattered among too many locations. Sometimes applications have been installed on a user's system or on a server. As demand for the application develops, users share it from the original location. Users frequently pass the word to other users about where they can mount the application. In such a situation, users may draw on inconsistent or unreliable sources and experience confusion regarding where they should get applications.

To solve this problem, designate dedicated application servers. Sharing all standard applications from the same server offers users a reliable source and lets you keep track of where maintenance is needed.

Standardize Server Layouts

Your environment, like that at Sun, may require many application servers to service different networks, buildings, and regions. If so, commit to using the same file system layout on all application servers. Although the contents of different application servers may vary from one server to another, the locations of individual applications should be consistent. A unified file system naming scheme simplifies user paths and reduces the updates required when users move and must change from one application server to another. This approach also simplifies the process of copying (distributing) applications from a master installation server to production servers, because the destination file system is the same.

Sometimes in comparing two locations where a product has been installed, you cannot tell whether the contents of like-named directories are intended to be the same or different; you have no outward clue. Sun recommends that you install applications in directories with names that identify both the product and the version. That practice lets you and others know what the directories contain. In addition, you can maintain multiple versions of an application at the same directory level.

In some environments, you must perform maintenance at numerous locations for each change. Using wrappers and a common command directory reduces the number of locations where attention is needed, limits them to servers, and leverages the results for all users.

Synchronize Version Cutovers

In the traditional UNIX environment, you may find it difficult to convert to a new application version quickly because of the number of changes to the user environment that may be required. Using symbolic links to control all the versioning at this level, and using wrappers that immediately provide any necessary user setups can help to speed up and synchronize cutovers. It can be difficult to know who is using particular applications or whether some applications are being used at all.

Wrappers can increase usage visibility if you code them to report to a central location by e-mail each time the user starts a product.
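The reporting lines inside such a wrapper might look like the following sketch. The log location, mail alias, and application name are assumptions; a site would substitute its own collection point.

```shell
# Sketch of usage-reporting lines near the top of a wrapper.
# The log location, mail alias, and application name are assumptions.
APPNAME=frame
USAGELOG=${USAGELOG:-$(mktemp)}
ENTRY="$APPNAME ${LOGNAME:-unknown} $(uname -n) $(date)"

# Record locally, and mail a copy to a central alias if a mailer exists
echo "$ENTRY" >> "$USAGELOG"
if command -v mailx >/dev/null 2>&1; then
    echo "$ENTRY" | mailx -s "app-usage: $APPNAME" app-usage 2>/dev/null
fi
```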

Benefits of a Standardized User Environment

The information in these sections describes the administrative benefits you gain from a standardized user environment setup.

Simplified User Mount Points

When users access applications from a variety of locations or even from multiple file systems on a dedicated server, they need a variety of mount points. You, as system administrator, probably have to maintain the information that supports these mounts. Regardless of whether you perform this maintenance on individual user systems or with automounter maps, the fewer times you need to update the user environment, the more time you save.

Simplified User Path Requirements

When you configure dedicated application servers so that all applications are accessible from a single file system, users need only one mount point, which may not need to be updated. Even when the contents of the file system that users are mounting change, the mount point remains the same.

Maintaining path updates for users can be an unnecessary burden. If users have the “right” path, you do not need to change it. The right path is one whose standard component(s) provide ongoing access to all applications.

Reduced Runtime Conflicts

The settings that some applications need at runtime may be in conflict with those needed by others. Wrappers tailor one process environment without affecting others.

Simplified User Relocations

User moves can impose a tremendous burden, because many user setups in a nonstandard environment are customized. Using wrappers and simplified user mount points and paths can drastically reduce the updates required to reinstate application service after a move. In some cases, you need change only the server name for the user's mount. Alternatively, you can let the automounter decide the server name based on the network topology. Refer to Part 3 for information about the automounter.

Using Wrapper Technology

Wrappers are custom startup scripts for applications, and have been used for quite some time. Many application vendors, such as Frame Technology, use wrappers to tailor their application startup.

Vendors cannot, however, anticipate the full range of startup decisions and settings that are needed in every customer environment. You can add value by developing wrappers that are truly customized to your own end-user environment. It may be worth writing your own wrapper—even to serve as a front end to a vendor-designed application wrapper. Wrappers can leverage your system administration expertise and hard-won knowledge of the application requirements in a consistent way, to the benefit of all your users.

Wrappers and Dot Files

Ordinarily, user dot files (for example, .login and .cshrc for the C shell or .profile for the Bourne and Korn shells) try to provide for what users may do after they log in. The goal is to define a comprehensive environment that supports all requests to access applications. It is not only difficult, but in some cases impossible, to provide for all cases: Some applications need a different value for an environment variable than do other applications that use the same variable name.

For example, to run a given Sybase application, users may need to set the DSQUERY variable to identify the back-end database server for the application. If this variable is set from dot files at login time, it extends throughout subsequent shell environments. However, other Sybase applications may require different DSQUERY values. If, instead, you write a wrapper for each Sybase application, each wrapper can set DSQUERY to the value needed for the application that is associated with it.

When you use wrappers, the environment for each application is set up as needed. Wrappers construct the needed environment at runtime, before executing the application. In addition, the settings are visible to the resulting application process only; they do not interact with the rest of the user's environment. This encapsulation of runtime environment is a significant advantage of wrappers.
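The encapsulation can be demonstrated with a small sketch. Here the wrapper is written to a temporary file so that the example is self-contained; the server name SALES_DB1 is an assumption, and a real wrapper would end with something like exec /usr/apps/pkgs/sybase-app/bin/app "$@" instead of exec "$@".

```shell
# Sketch: a wrapper that pins DSQUERY for one application.
# SALES_DB1 and the temporary-file packaging are assumptions.
WRAPPER=$(mktemp)
cat > "$WRAPPER" <<'EOF'
#!/bin/sh
DSQUERY=SALES_DB1
export DSQUERY
# A real wrapper would exec the application binary here
exec "$@"
EOF
chmod +x "$WRAPPER"

# The setting is visible inside the wrapped process only
"$WRAPPER" sh -c 'echo "inside:  DSQUERY=$DSQUERY"'
echo "outside: DSQUERY=${DSQUERY:-unset}"
```

The wrapped process sees DSQUERY=SALES_DB1, while the invoking shell's environment is untouched.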

Likewise, users' paths frequently must be updated as applications come and go, in an effort to provide for what the user may decide to run.

Consider this analogy: In a given year, you plan to go running, hiking, skating, scuba diving, and snow skiing. (Forget for a moment that, as a system administrator, you're too busy.) Doesn't it seem more practical to don the special equipment for each activity just before you need it (and take it off when you're done), rather than trying to put it all on at the beginning of the year “just so you'll be ready”? Clearly, the latter approach can generate conflicts. And in choosing where to go skiing, for instance, you probably would prefer to choose your destination based on where the snow is at the time you are ready to go.

Wrappers and Site Initialization Files

You can incorporate wrappers in site initialization files. Site initialization files locate initialization files centrally and distribute them globally. With site initialization files, you can continue to introduce new functionality to the user's work environment and also enable the user to customize individual user initialization files.

You create a site initialization file and add a reference to it in each user's initialization file. When you reference a site initialization file in a user initialization file, all updates to the site initialization file are automatically reflected when the user logs in to the system or when a user starts a new shell.

You can do any customization in a site initialization file that you can do in a user initialization file. Site initialization files typically reside on a server or a set of servers and appear as the first statement in a user initialization file. Each site initialization file must be the same type of shell script as the user initialization file that references it. In other words, you must write two versions of the site initialization file: one for users of the C shell or tcsh, and one for users of the Bourne, Korn, and Bash shells. It can, however, be a challenge to keep these two versions in sync.

To reference a site initialization file for a C shell user initialization file, put a line similar to the following example at the beginning of each user's .cshrc initialization file.

source /net/machine-name/export/site-files/site-init-file

To reference a site initialization file in a Bourne or Korn shell user initialization file, put a line similar to the following example at the beginning of each user's .profile initialization file.

. /net/machine-name/export/site-files/site-init-file

Example of a Site Initialization File

The following example shows a C shell site initialization file named site.login in which a user can choose a particular version of an application.

# @(#)site.login
main:
echo "Application Environment Selection"
echo ""
echo "1. Application, Version 1"
echo "2. Application, Version 2"
echo ""
echo -n "Type 1 or 2 and press Return to set your application environment: "
set choice = $<
if ( $choice !~ [1-2] ) then
        goto main
endif
switch ($choice)
case "1":
        setenv APPHOME /opt/app-v.1
        breaksw
case "2":
        setenv APPHOME /opt/app-v.2
        breaksw
endsw

You would reference the site.login site initialization file located on a server named server2 in the user's .cshrc file (C shell users only) with the following line. The automounter must be running on the user's system.

source /net/server2/site-init-files/site.login
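Because the sh-family version must be maintained separately, a Bourne/Korn counterpart of the same menu might look like the following sketch. The function packaging and the canned answer at the end are only for non-interactive demonstration; in a real site.profile you would call the function directly so that it reads the user's terminal.

```shell
# Sketch: sh/ksh counterpart of the site.login example above.
select_app_env() {
    while :; do
        echo "Application Environment Selection"
        echo ""
        echo "1. Application, Version 1"
        echo "2. Application, Version 2"
        echo ""
        printf "Type 1 or 2 and press Return to set your application environment: "
        read choice || return 1
        case $choice in
            1) APPHOME=/opt/app-v.1; export APPHOME; return 0 ;;
            2) APPHOME=/opt/app-v.2; export APPHOME; return 0 ;;
        esac
    done
}

# Non-interactive demonstration with a canned answer
select_app_env <<EOF
2
EOF
echo "APPHOME=$APPHOME"
```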

Additional Wrapper Advantages

With wrappers, you can provide sensible default values for variables while still allowing users the option to override those settings. You can automate user installation steps that some applications require when first run and know that you are producing consistent results. You can also generate usage information about the application.
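The default-with-override pattern is a one-line idiom in a wrapper. In this sketch the variable names and values are assumptions; the point is that a user's own setting, if present, wins.

```shell
# Sketch: wrapper lines that supply defaults yet honor a user's
# own settings. Variable names and values are assumptions.
PRINTER=${PRINTER:-bldg3-ps}        # default unless the user set one
APP_LANG=${APP_LANG:-C}             # hypothetical per-app setting
export PRINTER APP_LANG
echo "printer=$PRINTER lang=$APP_LANG"
```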

Wrapper Overhead and Costs

Some administrators question whether the merits of a wrapper approach justify the overhead imposed each time an application starts up. After all, an additional shell script runs ahead of the normal application startup. Several years of experience with complex wrappers at Sun have shown that the delay in startup time is trivial and the benefits overwhelming.

The biggest cost to consider is the flip side of the greatest benefit—wrappers are powerful, so they require care. Wrappers present consistent behavior to large numbers of users. If wrappers are well produced and maintained, they deliver gratifyingly reliable service and prevent many problems. On the other hand, if wrappers are broken, service to large numbers of users may be impacted.

Introduction of Wrappers into an Existing Environment

One of the great advantages of wrappers is that you can introduce them immediately into almost any application environment. Once you develop a wrapper for a given application, if the command names that link to it are installed in a location already in the users' paths (for example, /usr/local/bin), you can make the application immediately available without needing to do anything to set up the user environment.

To provide a limited implementation, you can decide how many wrappers you want to provide, and for which applications. You can write wrappers as you add new packages, and you can write wrappers for older applications as well. You can create links to the wrappers in a directory already in the users' paths. Alternatively, you can create a new directory that contains the links to the wrappers.

The following tasks are required in setting up a limited implementation of an application server with wrappers.

  • Installing packages by using vendor instructions.

  • Creating wrappers for applications, to eliminate or minimize any requirement for hard-coded setup by individual users.

  • Creating all application command names as symbolic links in a directory that is already on the users' paths (or in a new directory to be added to their paths).

  • Creating symbolic links to point to the application wrapper.

Designing an Application Server

To provide a complete implementation of these techniques throughout an environment, you perform the following tasks on the server.

  • Identifying servers to specialize in providing application access.

  • Implementing the fewest possible slices (partitions) to contain the software packages.

  • Performing software installations on these servers in a consistent file system layout.

  • Sharing the application server file system read-only to users if possible. Some applications must write into their library areas (although it is bad practice) and require their application area to be readable and writable by at least a group.

  • Naming package directories in a way that reflects both the application name and the version.

  • Installing packages initially per vendor instructions and then (if necessary) adjusting them to simplify and encapsulate their structure.

  • Creating wrappers for applications, to eliminate or minimize any requirement for hard-coded setup on the part of individual users.

  • If you use site initialization files, creating or modifying the appropriate site initialization file.

  • Creating all application command names as symbolic links in a common directory, and creating symbolic links to point to the application wrapper.

  • As applications are added to a server, using the rdist command to update other servers that mirror this central application server's applications.

  • Separating servers for network services (NIS/NIS+, DNS, NTP, mail, and so on) from application servers in all but the smallest environments.

You perform the following tasks in the user environment.

  • Setting up users with the appropriate mount point and mount to access the application server.

  • Setting up users with a path that includes the common command directory.

  • If you use site initialization files, adding the reference to the appropriate server site initialization file to users' initialization files.

The following sections describe in greater detail the basic tasks involved in a general implementation. However, coverage of many topics necessarily is superficial and the overall model is simplified.

Server Configuration

Consider the following points when designating servers to act as application servers.

  • Choose server configurations that you believe to be robust. Consolidating applications into one location simplifies life only to the extent that the system provides ongoing, reliable service. Typically, when application service is down, users are down.

  • Choose servers that can retain their identities for reasonable lengths of time. Host name changes require mount maintenance, and host ID changes can make licensed passwords obsolete.

Alternatively, you can use DNS CNAMEs (aliases) for each service (for example, dns.domainname, nisplus.domainname, apps.domainname). If you want to migrate the service to another host, you can move it while keeping the original host up. For a brief time, two hosts are running the same application. Change the DNS CNAME and wait for all clients to roll over to the new host. Then, retire the old host server.
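In a DNS zone file, such a service alias might look like the following sketch; the host and domain names are examples only.

```
; Zone file sketch -- host and domain names are examples
apps    IN  CNAME   oak.example.com.
; To migrate the service, repoint the alias and let clients roll over:
; apps  IN  CNAME   elm.example.com.
```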


User Capacity

It is impossible to offer specific guidelines concerning the number of servers you will require. Your goal is to provide reasonable NFS response time to all clients served. The number of users each server can support depends on many factors, such as the server characteristics, network characteristics, the types of applications being served, and the number of clients.

Try to locate application servers on the same network segments as the bulk of their clients. As a rule, you obtain the best response if you minimize NFS traffic through routers and other store-and-forward network devices.

Automounter maps for application directories are especially useful when applications are moved from one server to another.

For example, with the automounter, you could change application servers with the following steps.


1.
Create a new application server and load the new software.

2.
Update the NIS/NIS+ automounter map with the information for the new application server.

3.
Wait for all automounter clients to unmount the old application server, then start mounting these file systems from the new application server.

Wait a week or so for those users who run applications such as FrameMaker for days at a time, thus holding down the mount point.

4.
Retire the old application server.
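The map update in step 2 amounts to repointing one entry. In this sketch of an indirect automounter map, the key and server names are examples only.

```
# Indirect map auto_apps -- key and server names are examples
# Before the cutover:
apps    -ro    oldserver:/usr/apps
# After step 2, the same key points at the new server:
# apps  -ro    newserver:/usr/apps
```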

Compatible Services

It is probably simplest to dedicate a server exclusively as an application server. If, however, it is impossible or impractical to do so in your environment, you may need to implement a multipurpose server. Note that a multipurpose server is a bad practice because it provides a single point of failure.

Certain services present little conflict with NFS service because of their lightweight or typical scheduling. Examples include DNS or NIS/NIS+ servers. Additional, nonapplication NFS roles, such as sharing client root or home directory file systems, may have some impact on application response time.

NOTE

For its role as an NFS server, a platform need not be typical of the user base platforms. However, if an application server is also to act as a license server, it must be capable of running the license support binaries provided by the application vendors.


Other functions are incompatible with optimum NFS performance because they make heavy CPU and I/O demands. Examples of incompatible functions include back-end database engines, development activity such as compiling and debugging, and routing.

Disk Allocation

You need to allocate adequate space for applications on the server, allowing ample space to accommodate future additions. Also remember that you may need space for multiple versions of some applications as you transition users to newer versions.

As noted earlier, you want to serve applications from a single file system to minimize user mounts. If your overall application space requirements exceed the size of your largest disk, you may want to use the Veritas Volume Manager or Online: DiskSuite product. These products let you concatenate (group together) multiple physical disk slices into one logical metaslice. They also offer other performance and high-availability enhancements, such as striping, mirroring, and “hot spare” capability.

Contrary to system defaults, put /, /usr, and /opt into a single partition large enough to hold dozens or hundreds of patches as well as providing enough room for an OS upgrade.

File System Configuration

The following sections suggest a basic file system configuration for application servers. When you create one or more application servers, you generally provide a single file system with a consistent directory hierarchy. In that way, you create an environment that is consistent throughout your organization.

Base Directories

When you have a server with a disk slice (partition) that you consider adequate for long-term use as an application server, you can begin to implement the file system itself. As a foundation, Sun recommends that you create a minimum of two standard directories, which, in this model, we name /usr/apps/exe and /usr/apps/pkgs.

You install symbolic links or wrappers that represent all the available commands used to execute applications in /usr/apps/exe, the common command directory. You install all of the applications that the symbolic links and wrappers point to in the /usr/apps/pkgs directory.

Parallel Hierarchies

You may want to create one or more parallel file systems. For example, you might want to make a distinction between packages implemented by central administration and packages introduced by regional administration. You also might want to distinguish between production and beta versions of software.

If you want to create such parallel hierarchies, you could designate them as follows.

/usr/apps/local/pkgs
/usr/apps/local/exe

The /usr/apps/local/pkgs directory contains the applications, and the corresponding /usr/apps/local/exe directory contains the symbolic links to the wrappers for those applications. Under this type of arrangement, you need to add a second path (/usr/apps/local/exe) to the users' environment. If you arrange the directory as a parallel hierarchy under a single file system instead of a separate file system, you can use a single mount point. If you create a separate file system, users need to have a second mount point.

Clearly there are more variations not presented here. It is important for you to determine your needs. Try to plan for the long term, and try to keep your setup as simple and as consistent as possible. At Sun it has, indeed, been possible to provide most application services through a single mount.

Transitory Names

If you use wrappers, avoid the temptation to create a file system with directories that are named after architectures or other transitory distinctions; for example, /usr/apps/sun4u. Packages are always present, but other distinctions come and go. Confine file system distinctions to individual application directories (which come and go themselves) where the changes impact only the wrapper.

Permissions

Unless you have good reasons not to do so, permissions should be mode 755 for directories you create and for those within applications, so that they are writable by owner, with read and execute for group and world. Sometimes vendors ship nonwritable directories that interfere with your ability to transfer the contents to another system. In general, make other files writable by owner and readable by all, and leave execute permissions intact. You can use the following commands to change a directory hierarchy to the recommended permissions.

The arguments to the find -perm option must be octal. -perm takes two kinds of arguments: an absolute octal permission (for example, -perm 0777), which matches any file or directory that has exactly those permissions, and an octal permission preceded by a minus sign, which matches any file or directory that has at least the specified permission bits turned on, even if other bits are also on (for example, -perm -0020 matches any file or directory that has the group write bit turned on).


Use the following command to change permissions only on directories that do NOT have the setuid, setgid, or sticky bits set.


/usr/bin/find directory-name -type d ! \( -perm -4000 -o -perm -2000 -o -perm \
 -1000 \) -exec /usr/bin/chmod 755 {} \;

Use the following command to find all files that are not writable by their owners and make them writable.


/usr/bin/find directory-name -type f ! -perm -0200 -exec /usr/bin/chmod u+w {} \;

Use the following command to find all files that are not readable by user, group and other, and make them readable by all three.


/usr/bin/find directory-name -type f ! \( -perm -0400 -perm -0040 -perm -0004 \) \
 -exec /usr/bin/chmod ugo+r {} \;

Ownership

If you set up or maintain an extensive network of application servers and update them using trusted host relationships, consider what account should own the software distribution. In general, you do not need to have root be the owner. You may find some security advantages to creating a special, nonprivileged ownership account for managing application servers.

File System Sharing

Before users can access files on the application server, you must share (export) the file system to make it available to other systems on the network. Sun strongly recommends that you share the application's file system read-only.

Use the following steps on the application server to share the file system.

1.
Become superuser.

2.
Edit the /etc/dfs/dfstab file and add the following line.

share -F nfs -o ro pathname

3.
Type share pathname (or shareall) and press Return.

In the following example, the path name /usr/apps is shared.

oak% su
Password
# vi /etc/dfs/dfstab
[Add the following line]
share -F nfs -o ro /usr/apps 
[Quit the file and save the changes]
# share /usr/apps
#

Edit the /etc/init.d/nfs.server file and increase the thread count in the /usr/lib/nfs/nfsd -a 16 line. The -a value sets the number of potential threads within a single nfsd process; you can raise it to as much as 512 to improve NFS performance. For an application server, 16 is inadequate.


In the following example, the value for starting /usr/lib/nfs/nfsd in the /etc/init.d/nfs.server file is changed to 256.

        # If /etc/rmmount.conf exists and contains share commands
        # then start up mountd and nfsd

        if [ $startnfsd -eq 0 -a -f /etc/rmmount.conf ] && \
            /usr/bin/grep '^[   ]*share' \
            /etc/rmmount.conf > /dev/null 2>&1; then
                startnfsd=1
        fi

        if [ $startnfsd -ne 0 ]; then
                /usr/lib/nfs/mountd
                /usr/lib/nfs/nfsd -a 256
        elif [ ! -n "$_INIT_RUN_LEVEL" ]; then
                echo "NFS service was not started because" \
                        "/etc/dfs/dfstab has no entries."
        fi

If you must start the NFS service manually, use the following steps. Otherwise, the services start up at boot time.

1.
Type /usr/lib/nfs/nfsd 64 and press Return.

You have started the NFS daemons.

2.
Type /usr/lib/nfs/mountd and press Return.

You have started the mount daemon.

3.
Type share -F nfs -o ro pathname and press Return.

pathname is the name of the mount point file system. For example, if you have mounted the partition as /usr/apps, type share -F nfs -o ro /usr/apps and press Return.
