Estimate Refinement

One question that managers and customers ask is, "If I give you another week to work on your estimate, can you refine it so that it contains less uncertainty?" That's a reasonable request, but unfortunately it's not possible to act on it the way you'd like to. Research by Luiz Laranjeira suggests that the accuracy of the software estimate depends on the level of refinement of the software's definition (Laranjeira 1990). The more refined the definition, the more accurate the estimate. This makes sense intuitively because the more the system is nailed down, the less uncertainty feeds into the estimate.

The implication of Laranjeira's research is that the work you have to do to refine the software definition is the work of the software project itself: requirements specification, product design, and detailed design. It is simply not possible to know a schedule with ±10 percent precision at the requirements-specification stage. You can control the project to bring it in on the low side of an estimation range, but if you just let the project flow wherever it wants to go, you'll get no better than +50 percent, −33 percent precision.

It is possible to refine a project estimate as the project progresses, and you should do that. Too often, though, team leads allow themselves to be forced into making a single-point estimate early on and are then held accountable for it. For example, suppose a team lead provides the set of estimates listed in Table 8-11 over the course of a project.

Table 8-11. Example of a Single-Point–Estimation History

Point in Project                  Estimate (man-months)
Initial product concept           100
Approved product concept          100
Requirements specification        135
Product design specification      145
Detailed design specification     160
Final                             170

When a team lead uses this single-point approach, the customer will consider the project to have slipped over budget and behind schedule the first time the estimate increases—when it increases from 100 to 135 man-months. After that, the project will be considered to be slipping into ever more trouble. That's absurd because not enough was known about the project when the 100-man-month estimate was made to create a meaningful estimate. The final tally of 170 man-months might actually represent excellent efficiency.


Contrast that scenario with one in which the team lead provides estimates in ranges that become narrower as the project progresses, such as those shown in Table 8-12.

Table 8-12. Example of a Range-Estimation History

Point in Project                  Estimate (man-months)
Initial product concept           25–400
Approved product concept          50–200
Requirements specification        90–200
Product design specification      120–180
Detailed design specification     145–180
Final                             170

These estimates contain a tremendous amount of imprecision, and all but the most sophisticated customers will try to get you to narrow the ranges. But it isn't possible to provide more precision than that shown in Table 8-12—it's only possible to lie about it or not to know any better. The imprecision isn't a sign of a bad estimate; it's part of the nature of software development. Failure to acknowledge imprecision is a sign of a bad estimate.

In the case illustrated in Table 8-12, as the team lead refines each estimate, management or the customer will consider the project to be staying within their expectations. Rather than losing the customer's confidence by taking one schedule slip after another, the team lead builds confidence by refusing to provide more precision than is reasonable and by consistently meeting the customer's expectations.

How you present estimate revisions can affect whether they are accepted. If you explain the estimation story ahead of time and promise to provide increasingly refined estimates at regular milestones, such as those contained in Table 8-12, that will make for an orderly, respectable process.
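The narrowing ranges in Table 8-12 can be produced mechanically by applying stage-dependent low and high multipliers to each stage's nominal estimate. The multipliers below are illustrative assumptions chosen so that the output roughly matches Table 8-12; they are not taken from the text.

```python
# Sketch: deriving a range estimate from a nominal estimate at each
# project stage. STAGE_MULTIPLIERS is a hypothetical table; the
# multipliers are assumptions that approximate Table 8-12.
STAGE_MULTIPLIERS = {
    "Initial product concept":       (0.25, 4.0),
    "Approved product concept":      (0.50, 2.0),
    "Requirements specification":    (0.67, 1.5),
    "Product design specification":  (0.80, 1.25),
    "Detailed design specification": (0.90, 1.10),
}

def estimate_range(stage, nominal_man_months):
    """Return (low, high) man-month bounds for a stage's nominal estimate."""
    low_mult, high_mult = STAGE_MULTIPLIERS[stage]
    return (round(nominal_man_months * low_mult),
            round(nominal_man_months * high_mult))

low, high = estimate_range("Initial product concept", 100)
print(f"{low}-{high} man-months")  # 25-400 for a 100 man-month nominal
```

The key point is that the range, not the nominal number, is the estimate: as the project's definition is refined from stage to stage, the multipliers converge toward 1.0 and the range narrows on its own.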

Recalibration

Suppose that you have a 6-month schedule. You planned to meet your first milestone in 4 weeks, but it actually takes you 5 weeks. When you miss a scheduled date, there is a question about how to recalibrate the schedule. Should you:

  1. Assume you can make up the lost week later in the schedule?

  2. Add the week to the total schedule?

  3. Multiply the whole schedule by the magnitude of the slip, in this case by 25 percent?

The most common approach is #1. The reasoning typically goes like this: "Requirements took a little longer than we expected, but now they're solid, so we're bound to save time later. We'll make up the shortfall during coding and testing."


A 1991 survey of more than 300 projects found that projects hardly ever make up lost time—they tend to get further behind (van Genuchten 1991). That eliminates option #1.


Estimates tend to be inaccurate for systemic reasons that pervade the whole schedule, such as pressure from management to use optimistic assumptions. It's unlikely that the whole schedule is accurate except for the one part you've actually had experience with. That eliminates option #2.

With rare exception, the correct response to a missed milestone is option #3. That option makes the most sense analytically. It also matches my experience the best. If you aren't ready to extend the schedule by that amount, and people often aren't, then you can delay your decision and get more data by monitoring how you do in meeting the second milestone. But if you're still off by 25 percent in meeting the second milestone, it will be harder for you to take corrective action, and the corrective action you do take will have less time to work than it would if you had taken corrective action at the first available opportunity.
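Option #3 is simple arithmetic: scale the whole schedule by the ratio of actual to planned milestone duration. A minimal sketch, using the numbers from the text and assuming the 6-month schedule is 26 weeks:

```python
# Sketch of option #3: multiply the whole schedule by the magnitude of
# the slip. Numbers follow the example in the text: a 6-month schedule
# (assumed here to be 26 weeks) whose first milestone took 5 weeks
# instead of the planned 4.
def recalibrate(total_weeks, planned_weeks, actual_weeks):
    """Scale the entire schedule by the size of the milestone slip."""
    slip_factor = actual_weeks / planned_weeks  # 5 / 4 = 1.25
    return total_weeks * slip_factor

new_total = recalibrate(total_weeks=26, planned_weeks=4, actual_weeks=5)
print(new_total)  # 32.5 weeks: the 25 percent slip applied to all 26 weeks
```

Note that the slip factor is applied to the entire schedule, not just the remaining work, because the same systemic optimism that caused the first milestone to slip is assumed to affect every later milestone as well.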

Changing the estimate after missing or beating a milestone isn't the only option, of course. You can change the "product" corner of the schedule/product/cost triangle, or you can change the "cost" corner. You can change the spec to save time. You can spend more money. The only thing you can't do is to keep the schedule, the spec, and the cost the same and expect the project to improve.

CROSS-REFERENCE

For more on recalibration, see "Regularly assess progress and recalibrate or replan" in Using Miniature Milestones.

The problems associated with schedule slips are mainly confidence problems. If you sign up to perform to a best-case estimate, a slip means that you have failed or are on the brink of failing. You're not meeting your planned schedule, and there's no basis for knowing what your completion date really is.

But if you've described ahead of time how your estimate will mature as the project matures and given your estimates as ranges, there will be fewer slips. As long as you're in the range, you're doing fine. A "slip" becomes something that's completely out of the estimation range, and that becomes a rare occurrence.
