Managing the Risks of Measurement

In general, measurement is an effective risk-reduction practice. The more you measure, the fewer places there are for risks to hide. Measurement, however, has risks of its own. Here are a few specific problems to watch for.

Overoptimization of single-factor measurements. What you measure gets optimized, and that means you need to be careful when you define what to measure. If you measure only lines of code produced, some developers will alter their coding style to be more verbose. Some will completely forget about code quality and focus only on quantity. If you measure only defects, you might find that development speed drops through the floor.

It's risky to try to use too many measurements when you're setting up a new measurement program, but it's also risky not to measure enough of the project's key characteristics. Be sure to set up enough different measurements that the team doesn't overoptimize for just one.


Measurements misused for employee evaluations. Measurement can be a loaded subject. Many people have had bad experiences with measurement in SAT scores, school grades, work performance evaluations, and so on. A tempting mistake to make with a software-measurement program is to use it to evaluate specific people. A successful measurement program depends on the buy-in of the people whose work is being measured, and it's important that a measurement program track projects, not specific people.

Perry, Staudenmayer, and Votta set up a software research project that illustrated exemplary use of measurement data. They entered all data under an ID code known only to them. They gave each person being measured a "bill of rights," including the right to temporarily discontinue being measured at any time, to withdraw from the measurement program entirely, to examine the measurement data, and to ask the measurement group not to record something. They reported that not one of their research subjects exercised these rights, but knowing the rights existed made the subjects more comfortable (Perry, Staudenmayer, and Votta 1994).


For an excellent discussion of problems with lines-of-code measurements, see Programming Productivity (Jones 1986a).

Misleading information from lines-of-code measurements. Most measurement programs will measure code size in lines of code, and there are some anomalies with that measurement. Here are some of them:

  • Productivity measurements based on lines of code can make high-level languages look less productive than they are. High-level languages implement more functionality per line of code than low-level languages. A developer might write fewer lines of code per month in a high-level language and still accomplish far more than would be possible with more lines of code in a low-level language.

  • Quality measurements based on lines of code can make high-level languages look as if they promote lower quality than they do. Suppose you have two equivalent applications with the same number of defects, one written in a high-level language and one in a low-level language. To the end-user, the applications will appear to have exactly the same quality levels. But the one written in the low-level language will have fewer defects per line of code simply because the lower-level language requires more code to implement the same functionality. The fact that one application has fewer defects per line of code creates a misleading impression about the applications' quality levels.
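The defect-density anomaly described above is easy to see with a little arithmetic. Here is a minimal sketch with hypothetical numbers: two functionally equivalent applications, each with the same 50 user-visible defects, one implemented in a quarter as much code as the other.

```python
# Hypothetical illustration: two functionally equivalent applications with
# the same number of user-visible defects, written in different languages.
apps = {
    "high_level": {"loc": 10_000, "defects": 50},  # compact, high-level language
    "low_level":  {"loc": 40_000, "defects": 50},  # verbose, low-level language
}

for name, app in apps.items():
    # Defect density in defects per thousand lines of code (KLOC)
    density = app["defects"] / (app["loc"] / 1000)
    print(f"{name}: {density:.2f} defects/KLOC")

# high_level: 5.00 defects/KLOC
# low_level:  1.25 defects/KLOC
# Users experience identical quality, but the low-level version *looks*
# four times better on a defects-per-KLOC chart simply because it needed
# four times as much code to deliver the same functionality.
```

The same arithmetic distorts productivity comparisons: a developer who delivers the feature set in 10,000 high-level lines looks less "productive" in LOC per month than one who needs 40,000 low-level lines.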

To avoid such problems, beware of anomalies in comparing metrics across different programming languages. Smarter, quicker ways of doing things may result in less code. Also consider using function points for some measurements. They provide a universal language that is better suited for some kinds of productivity and quality measurements.
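To make the function-point idea concrete, here is a sketch of an unadjusted function-point count. The five component categories and their average complexity weights are the standard IFPUG values; the component counts themselves are hypothetical, invented for illustration.

```python
# Standard IFPUG average complexity weights for the five function types.
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

# Hypothetical component counts for an example application.
counts = {
    "external_inputs": 20,
    "external_outputs": 15,
    "external_inquiries": 10,
    "internal_logical_files": 8,
    "external_interface_files": 3,
}

# Unadjusted function points: sum of (count x weight) over all categories.
unadjusted_fp = sum(WEIGHTS[k] * counts[k] for k in WEIGHTS)
print(unadjusted_fp)  # 20*4 + 15*5 + 10*4 + 8*10 + 3*7 = 296
```

Because function points measure delivered functionality rather than code volume, metrics such as defects per function point or function points per staff-month can be compared across implementation languages without the LOC distortions described above.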
