Productivity-Tool Acquisition

Organizations that have random or casual methods of acquiring software tools waste about 50 percent of all the money they spend on tools. Worse, poor tool investments are associated with long schedules. Organizations that use formal acquisition strategies can drop their wastage to about 10 percent and avoid the associated schedule problems (Jones 1994).
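To make those percentages concrete, here is a back-of-the-envelope sketch in Python. The budget figure is purely hypothetical; the 50-percent and 10-percent waste figures are Jones's.

    # Back-of-the-envelope tool-budget waste, using Jones's figures.
    # The $500,000 budget is hypothetical.
    annual_tool_budget = 500_000

    casual_waste = 0.50 * annual_tool_budget   # ad hoc acquisition
    formal_waste = 0.10 * annual_tool_budget   # formal acquisition strategy

    print(f"Casual acquisition wastes: ${casual_waste:,.0f}")   # $250,000
    print(f"Formal acquisition wastes: ${formal_waste:,.0f}")   # $50,000
    print(f"Recovered by formalizing:  ${casual_waste - formal_waste:,.0f} per year")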


Here are some common problems with tool acquisition:


For more on problems associated with tool acquisition, see Assessment and Control of Software Risks (Jones 1994).

  • The software tool market is prone to gimmickry and exaggerated claims.

  • Bad-tool acquisition precludes acquisition of more beneficial tools.

  • Thirty percent of acquired tools do not meet enough user needs to be effective.

  • Ten percent are never used after acquisition.

  • Twenty-five percent are used less than they could be because of lack of training.

  • Fifteen percent are seriously incompatible with existing tools and trigger some form of modification to fit the new tool into the intended environment.

The ultimate cost of a tool is only slightly related to its purchase price. Learning expense and efficiency gains or losses are much more important in determining the lifetime cost of a tool than is the purchase price.
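To see why, consider a rough lifetime-cost sketch. All of the figures below are hypothetical and exist only to show the relative sizes of the terms: the learning cost and the efficiency delta swamp the purchase price.

    # Rough lifetime-cost model for a tool (all figures hypothetical).
    # Purchase price is a one-time cost; learning and efficiency effects
    # recur for every developer who uses the tool.
    purchase_price      = 5_000    # per-seat license
    developers          = 10
    loaded_cost_per_day = 600      # fully loaded daily cost per developer

    learning_days_each = 15        # ramp-up time per developer
    learning_cost = developers * learning_days_each * loaded_cost_per_day

    # Efficiency delta after ramp-up; a negative value would be a net loss.
    efficiency_delta   = 0.05      # 5% productivity gain
    work_days_per_year = 220
    years_of_use       = 3
    efficiency_value = (efficiency_delta * developers * work_days_per_year
                        * years_of_use * loaded_cost_per_day)

    # A negative net cost means the tool pays for itself over its lifetime.
    lifetime_cost = purchase_price * developers + learning_cost - efficiency_value
    print(f"Purchase:          ${purchase_price * developers:,}")   # $50,000
    print(f"Learning:          ${learning_cost:,}")                 # $90,000
    print(f"Efficiency value:  ${efficiency_value:,.0f}")           # $198,000
    print(f"Net lifetime cost: ${lifetime_cost:,.0f}")              # -$58,000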

Acquisition Plan

An organization that waits until it needs a tool to begin researching has waited too long. Tool evaluation and dissemination should be an ongoing activity.

Tools group

An effective, ongoing approach is to identify a person or group to be responsible for disseminating information about software tools. Depending on the size of the organization, that person or group can be assigned tools responsibilities either full- or part-time and should be responsible for the following activities:


Intelligence gathering. The tools group should stay abreast of developments in the tools market and literature related to tools—news reports, marketing materials, comparative reviews, anecdotal reports from tool users, online discussion threads, and so on.

Evaluation. The group should evaluate new tools as they become available. It should maintain a "recommended list" of tools for general use. It should track how well each tool works on large projects, small projects, short projects, long projects, and so on. Depending on the size of the organization, it might recommend tools to be used on pilot projects on a trial basis, monitoring those pilot projects to identify winners and losers.

The tools group should continue to evaluate new releases of tools that have previously been found lacking. With less-formal evaluations, people sometimes develop mental blocks about tools they have had bad experiences with. They will shun version 5 of a tool because version 2 had more problems than their tool of choice—ignoring that versions 3, 4, and 5 of their tool of choice have had problems of their own.
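One lightweight way to keep the "recommended list" honest across releases is to record each evaluation in a structured form, so that results on large, small, short, and long projects can be compared later. Here is a minimal sketch; the fields and the sample entries are illustrative, not prescriptive.

    from dataclasses import dataclass, field

    @dataclass
    class ToolEvaluation:
        """One tool's track record, as maintained by the tools group."""
        tool_name: str
        version: str
        recommended: bool = False
        # Outcomes keyed by project profile, e.g. "large/long" -> result summary.
        pilot_results: dict[str, str] = field(default_factory=dict)
        known_incompatibilities: list[str] = field(default_factory=list)

    # Hypothetical entry for an evaluation in progress.
    record = ToolEvaluation("AcmeBuild", "3.1", recommended=True)
    record.pilot_results["small/short"] = "cut build setup from 3 days to 1"
    record.pilot_results["large/long"]  = "pilot underway; results due next quarter"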

Coordination. Different groups within an organization can all try new tools, but without coordination they might all try the same new tool. There might be six promising new tools and six groups to try them, but without coordination maybe only three of the six tools will be tried. Some of those groups might have been just as happy to try one of the three untried tools as the one they did try. The tools group should coordinate tool experimentation so that groups don't all learn the same lessons the hard way and so that the organization learns as much as it can as efficiently as it can.

Dissemination. The tools group should get the word out to people who need tool information. The group should maintain reports on different groups' experiences with each tool and make those reports available to other groups who are considering using a particular tool. If the organization is large enough, it could desktop publish an informal monthly or bimonthly tools newsletter that reports on the results of tool use on pilot projects and solicits groups to become pilots for tools under evaluation. It could facilitate informal communications among different tool users, perhaps by hosting monthly brown-bag lunch presentations on new tools or by moderating bulletin-board discussions.

Risks of setting up a tools group

There are several risks in setting up a centralized tools group, the worst being overcontrol. It's important that the tools group collect and disseminate information about effective tools; however, if the group is to support rapid development, it can't be allowed to calcify into a bureaucratic standards organization that insists that all groups use only the tools it has "approved."

The tools group should be set up as a service organization rather than as a standards organization. The tools group's job is to help those working on real projects to do their jobs better. The people working on the front lines of projects know best what they need. The tools people can make recommendations, provide advice, and lend support, but their judgments about which tools to use should not prevail over the judgments of the people who actually have to live with the tools.

The tools group needs to be staffed by people whose recommendations will be heard. If the group is prioritized low and staffed with cast-off developers, the group's recommendations may be ignored, possibly for good reason.

Selection Criteria

This section contains criteria to use in your tool acquisitions. You can use these criteria within the context of a standing tools group or for the evaluation of a specific tool for a specific project.

CROSS-REFERENCE

The considerations in selecting tools overlap with the considerations in selecting outsourcing vendors. For those criteria, see Chapter 28.

Estimated gain. Foremost on the list for a rapid-development project is to estimate the efficiency gain you expect to realize from the use of a particular tool. This gain is often difficult to measure and, for reasons spelled out in detail throughout this chapter, should be estimated conservatively.

A good rule of thumb is to assume that any vendor claim of more than 25-percent improvement in productivity per year from a single tool is either specious or false (Jones 1994). You can decide for yourself what to do with vendors who make such claims; some people avoid dealing with them entirely.
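Here is one way you might fold that rule of thumb into schedule planning. The 25-percent cap is Jones's rule of thumb; the vendor's claimed figure below is hypothetical.

    # Cap any single-tool productivity claim at 25% per year (Jones 1994).
    CREDIBLE_ANNUAL_GAIN_CAP = 0.25

    def plan_with_discounted_claim(claimed_annual_gain: float) -> float:
        """Return the gain to use for planning, flagging suspect claims."""
        if claimed_annual_gain > CREDIBLE_ANNUAL_GAIN_CAP:
            print(f"Claim of {claimed_annual_gain:.0%}/year exceeds the 25% "
                  "rule of thumb; treat it as specious and plan conservatively.")
            return CREDIBLE_ANNUAL_GAIN_CAP
        return claimed_annual_gain

    # A vendor claims a 3x productivity boost (a 200% gain); plan for 25% at most.
    planning_gain = plan_with_discounted_claim(2.00)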

Vendor stability. You stake your future on the future of the vendor who provides the tools. How long has the vendor company been in business? How stable are they? How committed are they to the specific tool of interest? Is the tool in the vendor's main line of business, or is it a sideline? Is the tool likely to be supported by another company if the current vendor goes out of business?

If you have concerns about the vendor's stability and you're still interested in the tool, you should consider what you will do if the vendor goes out of business. Do you need the tool for more than one project or version? Can you maintain the tool yourself? Will the vendor provide source code? If so, what is the quality level of that source code?

Quality. Depending on the kind of tool you're looking at, it could be that the quality of the vendor's tool will determine the quality of your program. If the vendor tool is buggy, your program will be buggy. If the vendor tool is slow, your program will be slow. Check under the hood before you jump on the vendor's bandwagon.

Maturity. A tool's maturity is often a good indication of both quality and vendor commitment. Some organizations refuse to buy version 1 of any tool from a new vendor—no matter how good it is reported to be—because there is too much risk of unknown quality and, regardless of the vendor's good intentions, unknown ability to stay in business. Some organizations follow the "version 3 rule."

Version 1 is often tantamount to prototyping code. The product focus is often unclear, and you can't be sure of the direction in which the vendor will take future versions of the tool. You might buy version 1 of a class library because of the vendor's all-out emphasis on performance, only to find that by version 3 the vendor has shifted its focus to portability (with terrible performance consequences).


The product direction in version 2 is usually more certain, but sometimes version 2 is just a bug-fix release or displays the second-system effect: the developers cram in all the features they wanted to include the first time, and product quality suffers. Product direction can still be unclear.

Version 3 is frequently the first really stable, usable version of a tool. By the time version 3 rolls around, the product focus is usually clear and the vendor has demonstrated that it has the stamina needed to continue development of its product.

Training time. Consider whether anyone who will use the tool has direct experience with the tool. Has anyone on your team attended a training class? How available are freelance programmers who know how to use the tool? How much productivity will you lose to the learning curve?
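To put a number on that last question, you can use a simple linear ramp model: productivity starts at some fraction of normal and climbs back to 100 percent over the ramp-up period. Both parameters below are hypothetical.

    # Simple linear learning-curve model; both parameters are hypothetical.
    start_fraction = 0.4   # productivity in week 1 with the new tool
    ramp_weeks     = 8     # weeks until back to full productivity

    # Weekly productivity climbs linearly from start_fraction to 1.0.
    weekly = [start_fraction + (1.0 - start_fraction) * w / (ramp_weeks - 1)
              for w in range(ramp_weeks)]
    lost = sum(1.0 - p for p in weekly)
    print(f"Effort lost to the learning curve: {lost:.1f} "
          f"developer-weeks per developer")   # 2.4 developer-weeks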

Applicability. Is the tool really applicable to your job, or will you have to force-fit it? Can you accept the trade-offs that a design-to-tools strategy will entail? It's fine to use a design-to-tools strategy. Just be sure to do it with your eyes open.

CROSS-REFERENCE

For more on design to tools, see "Design to Tools."

Compatibility. Does the new tool work well with the tools you're already using? Does it restrict future tool choices?

Growth envelope. In addition to the direction in which you know you want your product to go, will the tool support the directions in which you might want your product to go?

It's the nature of a software project to expand beyond its original scope. I participated in one project in which the team had to choose between low-end, in-house database management software and a high-end commercial DBMS. The engineer who favored the commercial database manager pointed out that under heavy data loads the commercial DBMS was likely to be several times faster. The engineer who favored the in-house DBMS insisted that there was no processing-speed requirement, and so the performance consideration was irrelevant. Because the in-house DBMS carried no licensing fee, that's the one the company selected over its more powerful competitor.


In the end, the company might not have paid a licensing fee to use the in-house DBMS, but it did pay, and it paid, and it paid some more. Although the in-house DBMS had operated reasonably fast with small test databases, under full data loads it inched along. Some common operations took more than 24 hours. In the face of performance this abysmal—surprise! surprise!—a performance requirement was added. By that time, thousands of lines of code had been written to use the in-house database software, and the company wound up having to rewrite its DBMS software.

CROSS-REFERENCE

For more on growth envelopes, see "Define Families of Programs" in "Using Designing for Change."

Avoid selecting a tool that's only minimally sufficient for the job.

Customizing selection criteria. In defining a set of criteria to use in tool selection, be sure that you buy tools according to your own criteria rather than someone else's. It's fine to read comparative reviews to see what issues are involved in using a particular class of tool, because you often discover important issues that you hadn't thought about. But once you've seen what the issues are, decide for yourself what matters. Don't add criteria to your list solely because magazine reviewers have them on their lists. It's unlikely that even half the things mentioned in a typical magazine review will be of concern to any one of your individual projects.
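One way to make "your own criteria" concrete is a weighted scoring matrix: you pick the criteria and the weights, score each candidate, and compare totals. The criteria, weights, scores, and tool names below are illustrative only.

    # Weighted tool-selection matrix. The criteria and weights are yours to
    # choose; these values are illustrative. Scores run from 1 (poor) to 5.
    weights = {
        "estimated_gain":   0.30,
        "vendor_stability": 0.20,
        "quality":          0.20,
        "compatibility":    0.15,
        "growth_envelope":  0.15,
    }

    candidates = {
        "ToolA": {"estimated_gain": 4, "vendor_stability": 2, "quality": 3,
                  "compatibility": 5, "growth_envelope": 3},
        "ToolB": {"estimated_gain": 3, "vendor_stability": 5, "quality": 4,
                  "compatibility": 4, "growth_envelope": 4},
    }

    for name, scores in candidates.items():
        total = sum(weights[c] * scores[c] for c in weights)
        print(f"{name}: {total:.2f}")   # ToolA: 3.40, ToolB: 3.90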

Commitment

Once you've made the tool selection, commit to it. Don't keep looking over your shoulder and wondering if some other tool would have been better. Let the other tools go! As Larry O'Brien says, "When the project hits its first big bump, it's natural to worry that you're wasting effort. But every project and every tool has a first, big bump" (O'Brien 1995). You don't have to enjoy the bumps, but realize that you'll encounter at least a few no matter how good a job of tool-selection you've done. Switching tools mid-project just guarantees you'll have at least one more big bump.
