6.1. Planning Your Mass Deployment

The first step in preparing to roll out a large number of systems (after you read this chapter, of course) is to sit down and make a checklist or matrix in the form of a spreadsheet. Include every task required to set up a new computer, listing each in the order your personnel must perform it. The items should include binding to the directory service, creating local administrative accounts, setting preferences, locking down permissions, installing software, installing updates, and whatever other procedures apply specifically to your environment. Remember that mass deployment is sometimes one big hack, and therefore needs to be documented for your successor and for yourself six months from now.
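
Many of these checklist items lend themselves to scripting early on. As a minimal sketch, assuming a hypothetical account short name, UID, and placeholder password (none of which are prescribed by this chapter), creating a hidden local administrative account might look something like this:

    #!/bin/bash
    # Sketch: create a hidden local admin account.
    # Account name, UID, and password below are placeholders; on Mac OS X,
    # accounts with a UID below 500 can be hidden from the login window.

    ACCOUNT="ladmin"        # hypothetical account short name
    ACCOUNT_UID="499"       # below 500 so it can be hidden
    PASSWORD="changeme"     # replace with your own (ideally not stored in plain text)

    dscl . -create /Users/$ACCOUNT
    dscl . -create /Users/$ACCOUNT UserShell /bin/bash
    dscl . -create /Users/$ACCOUNT RealName "Local Administrator"
    dscl . -create /Users/$ACCOUNT UniqueID $ACCOUNT_UID
    dscl . -create /Users/$ACCOUNT PrimaryGroupID 20
    dscl . -create /Users/$ACCOUNT NFSHomeDirectory /var/$ACCOUNT
    dscl . -passwd /Users/$ACCOUNT "$PASSWORD"

    # Grant admin rights and hide sub-500 accounts from the login window
    dseditgroup -o edit -a $ACCOUNT -t user admin
    defaults write /Library/Preferences/com.apple.loginwindow Hide500Users -bool TRUE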

Remember to solicit content for this deployment matrix from your end users. After all, your institution's primary purpose for having computers is to serve your end users, not your IT staff. This list should be ever growing and should be linked in some way to your trouble-ticket tracking system. If you consider imaging to be not simply a one-time process but an integral workflow in supporting machines, you can use it to track problems that can be circumvented or mitigated simply by adding preflight stages to your imaging process. For instance, if you see that 20 percent of your helpdesk tickets relate to improper mail client configuration, perhaps your image should include one of the many automatic setup scripts available for common mail clients such as Microsoft Entourage (Outlook).

The tasks that systems require may depend on factors such as the department they're in and what they're used for. Next, you need to determine which tasks you'll carry out on which systems. If the steps are the same for all machines, the easiest approach will likely be a two-step process: initially, simply deploy a large, monolithic image to every system. A monolithic image is simply an image of an entire system, including applications, the operating system, and other requirements. Follow that with other tasks (carried out by scripts, package installers, or both) that you couldn't include in the image, for example, Active Directory binding, which must run at first boot. While granularity is normally an IT person's best friend, keeping things as simple as possible can also be an important mantra with mass deployment. Much like lawyers are coached never to ask a question they don't know the answer to, never include an imaging step, either preflight or postflight, that you cannot guarantee through testing will work with all the variables of your infrastructure. Imaging should serve to simplify your IT infrastructure, not complicate it. A good example of this was a deployment performed at a large institution. The on-site IT staff pushed out a non-universal copy of their antivirus software, which caused startup issues on older PowerPC machines. Had the imaging tests been performed on both architectures, this would have been caught; instead, the technician in charge of the deployment used a newer Intel machine for testing. For this reason, it's extremely important that you test with a cross section of hardware that matches your organization's current computer inventory.
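
One way to guard against exactly this kind of architecture mismatch is to branch your postflight installs on the processor type. A minimal sketch follows; the package names and paths are hypothetical examples, not anything shipped by a vendor:

    #!/bin/bash
    # Sketch: branch a postflight install on processor architecture so PowerPC
    # and Intel machines each receive a compatible build. Package paths are
    # placeholders.

    ARCH=$(/usr/bin/uname -p)   # returns "powerpc" or "i386" on Mac OS X

    if [ "$ARCH" = "powerpc" ]; then
        installer -pkg /Library/Deployment/antivirus-ppc.pkg -target /
    else
        installer -pkg /Library/Deployment/antivirus-intel.pkg -target /
    fi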

If procedures must differ for different parts of your organization, make sure to account for the specific differences in your matrix. Split your tasks (such as preferences to be set, bindings, and software to be installed) into two categories, as you can see in Table 6-1. The first should be the lowest-common-denominator tasks (and software, if required) that pertain to every single computer. Examples include operating-system installation, binding the OS into Active Directory (or another directory service), and other global tasks. This is sometimes best framed as a timeline, from start to finish. This timeline would separate out the tasks that you would perform manually if you were asked to set up a new employee's workstation.

Tasks to put in the second category are those that involve taking the groups of computers and users in your checklist and making them correspond to constructed users and groups in your global directory service. Based on this checklist, you now have an object-oriented model for who gets what items in your environment. This will serve as the blueprint for your deployment system.

Table 6.1. Object-Oriented Tasks
Global Tasks                        Packages
Enable FileVault Master Password    Install Adobe Suite
Install Mac OS X                    Setup for VPN Access
Install Microsoft Office            Fill Bookmarks for HR Dept.
Setup 802.1x                        Add Server Admin Tools
Add Hidden Local Admin Account      Add Citrix Client
Bind to Active Directory            Install iWork
Bind to Open Directory              Setup iChat

If you are an ITIL-based IT shop, then you likely already have a repository of all "supported" applications in the form of a Definitive Software Library (DSL). In this case, you will take the supported applications from your DSL and place them into one of the two columns.

NOTE

Once you know who gets what software, try to get volume-license keys from every vendor. Sometimes the cost can be prohibitive, but without volume licensing your automation options become very limited if you have to install a unique key for even a single software package. Also, be aware that smaller software packages may still require activation even with volume-license keys; every vendor is different. Software that is not as widely deployed may present serious design considerations if the vendor does not officially support mass deployment. Always test your images and processes on at least two systems to see how your software will handle being moved between different machines and, if necessary in your environment, hardware platforms. Some software registration systems use machine-specific data, such as the MAC address or other hardware information. If software registration cannot be easily baked into your image or package, you may need to use postflight scripting to accomplish the task.
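
As an illustration of that kind of postflight scripting, the sketch below simply gathers the machine-specific values a registration process might need; the registration command and license path are placeholders for whatever your vendor actually requires:

    #!/bin/bash
    # Sketch: collect machine-specific data for per-machine software
    # registration. The registration command itself is a hypothetical
    # placeholder for a vendor-supplied tool.

    # Primary Ethernet MAC address
    MAC=$(ifconfig en0 | awk '/ether/ {print $2}')

    # Hardware serial number
    SERIAL=$(system_profiler SPHardwareDataType | awk '/Serial Number/ {print $NF}')

    echo "Registering with MAC $MAC and serial $SERIAL"
    # "/Library/Application Support/ExampleApp/register" --mac "$MAC" --serial "$SERIAL"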

6.1.1. Monolithic vs. Package-Based Imaging

Mac OS X mass deployment is sometimes the subject of much debate. One of the leading topics in this debate is whether monolithic or package-based installations are the preferred methodology. This set of authors would like to put this to rest and say that both are preferred in all environments, time permitting. The question then becomes more a matter of workflow order than of the headlining technology. Monolithic installations can simply be the end result of package-based installs, where package-based installs are just the steps of a monolithic install split up into different file sets. That said, the preferred methodology is typically to start with packages and then build monolithic images from the resultant packages, depending on their size. In this way, you can add and remove items as needed, without the rebuild time that starting anew would require. If your end result is a large monolithic image, then larger datasets can be deployed as one stream of multicast data rather than as independent package installs via unicast. An example would be a package installer in excess of 50GB, such as one of Apple's Pro Applications. While a single package installer would allow you to easily remove or update this in your image, including this much data in your "base" monolithic image will increase deployment speed for a number of reasons. If your network supports multicast, you can push the image to an arbitrary number of computers via a single stream of data. If you have an image in excess of 50GB to be deployed to more than a dozen computers, this can mean big savings in network bandwidth and deployment time. Multicast deployment of individual packages is not a capability offered by the most popular deployment systems. In this regard, creating a large base image can result in a significantly more efficient deployment than having postflight installers run on each system independently. Each technique has its own merits, but when it comes right down to it, nearly every deployment will benefit from a mixture of the two.
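
For reference, Apple Software Restore (asr) is the tool most Mac deployment systems build on for multicast image streams. The sketch below shows one way the server and client sides might be invoked; the image path, server name, multicast address, and data rate are examples only, and the exact configuration keys should be verified against the asr man page for your OS release:

    #!/bin/bash
    # Sketch: multicast deployment of a monolithic image with Apple Software
    # Restore. Paths, addresses, and rates are examples; verify the plist keys
    # against the asr man page for your OS release.

    # On the server, describe the stream in a small configuration plist:
    cat > /tmp/multicast.plist <<'EOF'
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
      "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Data Rate</key>
        <integer>10000000</integer>
        <key>Multicast Address</key>
        <string>239.255.10.10</string>
    </dict>
    </plist>
    EOF

    # Start streaming a prepared (asr-scanned) image:
    sudo asr server --source /Images/BaseImage.dmg --config /tmp/multicast.plist

    # On each client, restore from the stream (this erases the target volume):
    # sudo asr restore --source asr://imaging-server.example.com \
    #     --target "/Volumes/Macintosh HD" --erase --noprompt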

While it can seem contradictory given the ease of creating an initial monolithic image, after a few years of imaging, it seems that everyone ends up learning that pushing out images monolithically is typically more time-consuming than breaking that same image up into parts. In package-based imaging, you put down a very sparse "base" image, which could even be a bare-metal image containing nothing except a Mac OS X install that has never been booted (such as what is configured from the factory on a new machine), and then perform postflight tasks to add the rest of the software and do the configuration.

With the purely monolithic technique, each time you go to build a new image, you may have to start from a clean OS installation and then perform a certain series of tasks on the system before making the image of it. If you have multiple architectures in a deployment (such as PowerPC and Intel), you could find yourself carrying out the procedure once for every architecture. This redundant work compounds if you have different departments that receive different software, causing you to create more and more images. With each equipment refresh or major update to push to clients, you might need to create a new image. Additionally, due to what is typically a lack of documentation, if your original image builder leaves, you often have no idea which changes, scripts, and software were originally included in your image without forensic backtracking.

Why would anyone use a single large image? Well, for one, it's pretty easy to do. In fact, for most simple environments, it's far easier than breaking that image into parts in terms of preparation time. For example, if you want all the computers you deploy to have the same configuration, you can embed that configuration into the computer from which you'll create an image: click a button that creates a preference rather than build an installer that installs that preference. Then, when you push that image out, the setting is there. Later, if you want to change the setting, you can send a script to do so, either through Apple Remote Desktop (ARD) or as an imaging task for subsequent sets of imaged computers. At that point, however, you're going to have to figure out which files were created by that change or, better yet, how to make the change programmatically (through a script) so you don't disturb other settings along the way.
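
As a simple illustration, making such a change programmatically usually comes down to a one-line defaults command that can be sent through ARD's Send UNIX Command feature or run as an imaging task. The domain and key shown here are just an example setting, not one mandated by this workflow:

    #!/bin/bash
    # Sketch: set a single preference programmatically rather than capturing
    # whole preference files in an image. Domain and key are examples only.

    # Make the login window ask for a name and password instead of listing users
    defaults write /Library/Preferences/com.apple.loginwindow SHOWFULLNAME -bool TRUE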

As you get more granular with your packages and scripts, you may end up using automation of some sort to alter each system-preference pane, configuration file, application, serial number, and anything else you can think of that you do to each new machine. That automation may consist of a managed-preference procedure (discussed in Chapter 7), a script, or a package. It's not uncommon to have 100 tasks to perform on a system post-imaging, but getting to that point can be time-consuming. In the long run, a truly package-based imaging system offers the most systems-management flexibility.

NOTE

While it may be more work for some environments to build a number of scripts or packages to automate your deployment, it's a great learning experience if you have the time, and it will aid in the ongoing imaging process as new machines and new operating systems (and builds) arrive to be deployed.

The monolithic image approach for an imaging environment as described in Table 6-1 would then result in a solution similar to Figure 6-1, with packages deployed post installation.

Figure 6.1. Workflow for monolithic imaging

Taking the imaging workflow to a more package-based approach would then result in a workflow more similar to Figure 6-2, where we take things into more of the object-oriented realm.

Figure 6.2. Package-based imaging

As we've indicated, on the surface, Figure 6-2 will seem like more work. However, once you introduce change into your environment, the larger the environment, the less work this approach will inevitably be.

6.1.2. Automation

The more computers you deploy, the more you'll want to automate the setup process. If you have to bind 25 machines into Active Directory and each takes roughly 5 minutes, you'll dedicate about 2 hours, which isn't too bad. But if you have 1,000 systems, we're talking about 83 hours. In that case, though writing a script to automate the process may consume 5 hours, you've saved 78 hours. On the other hand, for just 25 computers, writing a script wouldn't seem to make sense, since you'd spend an extra 3 hours. Except, if those 25 systems ever need reimaging, the work you did to automate the process will have paid off. An often-overlooked resource for this type of work is the massive number of scripts that are already available. Like many other IT professionals, the authors of this book often publish their scripts online in publicly accessible forums. With this said, when estimating the time to create a script, such as one used for Active Directory binding, always research whether one is already available freely from some other source. This small amount of forethought may even eliminate development time entirely if the script does exactly what you need it to do; if not, it is still often easier to start from an example than from scratch.
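
As an example of the kind of script in question, a basic Active Directory bind can be done with dsconfigad. The sketch below assumes a hypothetical domain, OU, and bind account; flag spellings have changed across Mac OS X releases (later builds use long-form flags such as -add and -username), so check the man page for your build:

    #!/bin/bash
    # Sketch: bind a Mac to Active Directory with dsconfigad.
    # Domain, OU, and credentials are placeholders; verify flag names against
    # the dsconfigad man page for your OS release.

    COMPUTER_ID=$(scutil --get LocalHostName)

    dsconfigad -f -a "$COMPUTER_ID" \
        -domain ads.example.com \
        -u bindadmin -p 'bindpassword' \
        -ou "CN=Computers,DC=ads,DC=example,DC=com"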

Refer to your checklist to decide which tasks you'll automate. Generally, you perform automation one of two ways: using packages (thus the term package-based imaging) or scripts. Packages are installers; scripts can also "install" items, but most often, you use them in the deployment process simply to augment or transform existing data. This line gets blurred a bit in that packages can be payload-free, meaning that they can be created with the express purpose of running scripts. Wrapping your final scripts in a package installer has huge advantages, as Apple's package-installer infrastructure includes many different components, such as preflight and postflight scripts, sanity checks for memory and system version, and graphical installer bundles, which means you can even put a basic user interface "on top" of your script to help the uninitiated.
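
For instance, a payload-free package might carry nothing but a postflight script like the following; the specific settings shown are arbitrary examples, and tools such as InstallEase and Iceberg (covered later in this chapter) will wrap a script like this into a package for you:

    #!/bin/bash
    # Sketch of a postflight script carried by a payload-free package.
    # The settings below are arbitrary examples of per-machine configuration.

    # Enable remote login (SSH) so the helpdesk can reach the machine
    systemsetup -setremotelogin on

    # Keep machines from sleeping mid-deployment; dim the display after 30 minutes
    pmset -a sleep 0 displaysleep 30

    exit 0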

Later in this chapter, in the section "InstallEase and Iceberg," we'll cover package creation more thoroughly. But for now, take a good look at your checklist. Some software comes in the form of a package installer that you can use for deploying the software. If you do use existing package installers, budget a couple of hours for testing each one. If you can't use an existing package, then you can either create a new one or write a script to place all of the files, or even parts of files, in their appropriate locations.
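
When testing an existing vendor package, the command-line installer tool is a quick way to confirm it deploys silently before you add it to your workflow; the package path below is just an example:

    #!/bin/bash
    # Sketch: confirm a vendor package installs silently from the command line
    # before adding it to a deployment workflow. The package path is an example.

    sudo installer -verboseR -pkg "/Volumes/VendorDMG/VendorApp.pkg" -target /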

NOTE

As experienced scripters (and managers of those who script), take our word for this: When you get a budget estimate for writing a particular script, just double it. This will save you a lot of grief down the road.
