Measurement

Software projects can be measured in numerous ways. Here are two solid reasons to measure your process:

For any project attribute, it's possible to measure that attribute in a way that's superior to not measuring it at all. The measurement might not be perfectly precise, it might be difficult to make, and it might need to be refined over time, but measurement will give you a handle on your software-development process that you don't have without it (Gilb 2004).

If data is to be used in a scientific experiment, it must be quantified. Can you imagine a scientist recommending a ban on a new food product because a group of white rats "just seemed to get sicker" than another group? That's absurd. You'd demand a quantified reason, like "Rats that ate the new food product were sick 3.7 more days per month than rats that didn't." To evaluate software-development methods, you must measure them. Statements like "This new method seems more productive" aren't good enough.

Be aware of measurement side effects. Measurement has a motivational effect. People pay attention to whatever is measured, assuming that it's used to evaluate them. Choose what you measure carefully. People tend to focus on work that's measured and to ignore work that isn't.

What gets measured, gets done.

Tom Peters

To argue against measurement is to argue that it's better not to know what's really happening on your project. When you measure an aspect of a project, you know something about it that you didn't know before. You can see whether the aspect gets bigger or smaller or stays the same. The measurement gives you a window into at least that aspect of your project. The window might be small and cloudy until you refine your measurements, but it will be better than no window at all. To argue against all measurements because some are inconclusive is to argue against windows because some happen to be cloudy.

You can measure virtually any aspect of the software-development process. Table 28-2 lists some measurements that other practitioners have found to be useful.

Table 28-2. Useful Software-Development Measurements

Size
- Total lines of code written
- Total comment lines
- Total number of classes or routines
- Total data declarations
- Total blank lines

Defect Tracking
- Severity of each defect
- Location of each defect (class or routine)
- Origin of each defect (requirements, design, construction, test)
- Way in which each defect is corrected
- Person responsible for each defect
- Number of lines affected by each defect correction
- Work hours spent correcting each defect
- Average time required to find a defect
- Average time required to fix a defect
- Number of attempts made to correct each defect
- Number of new errors resulting from defect correction

Productivity
- Work-hours spent on the project
- Work-hours spent on each class or routine
- Number of times each class or routine changed
- Dollars spent on project
- Dollars spent per line of code
- Dollars spent per defect

Overall Quality
- Total number of defects
- Number of defects in each class or routine
- Average defects per thousand lines of code
- Mean time between failures
- Compiler-detected errors

Maintainability
- Number of public routines on each class
- Number of parameters passed to each routine
- Number of private routines and/or variables on each class
- Number of local variables used by each routine
- Number of routines called by each class or routine
- Number of decision points in each routine
- Control-flow complexity in each routine
- Lines of code in each class or routine
- Lines of comments in each class or routine
- Number of data declarations in each class or routine
- Number of blank lines in each class or routine
- Number of gotos in each class or routine
- Number of input or output statements in each class or routine

You can collect most of these measurements with software tools that are currently available. Discussions throughout the book indicate the reasons that each measurement is useful. At this time, most of the measurements aren't useful for making fine distinctions among programs, classes, and routines (Shepperd and Ince 1989). They're useful mainly for identifying routines that are "outliers"; abnormal measurements in a routine are a warning sign that you should reexamine that routine, checking for unusually low quality.
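To show how little machinery some of these measurements require, here is a minimal sketch of a size-measurement script. It assumes Python source files under src/, a "#" comment convention, and an arbitrary two-standard-deviation rule for flagging outliers; those specifics are illustrative choices, not recommendations from this section.

# Minimal size-measurement sketch. Assumptions: Python source files under
# src/, "#" comments, and a two-standard-deviation outlier threshold.
import glob
import statistics

def measure_file(path):
    """Count total, comment, and blank lines in one source file."""
    total = comment = blank = 0
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            total += 1
            stripped = line.strip()
            if not stripped:
                blank += 1
            elif stripped.startswith("#"):
                comment += 1
    return {"file": path, "total": total, "comment": comment, "blank": blank}

def flag_outliers(measurements, key="total", threshold=2.0):
    """Return files whose metric is unusually far from the project average."""
    values = [m[key] for m in measurements]
    if len(values) < 2:
        return []
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []
    return [m for m in measurements if abs(m[key] - mean) > threshold * stdev]

if __name__ == "__main__":
    results = [measure_file(p) for p in glob.glob("src/**/*.py", recursive=True)]
    for m in results:
        print(f"{m['file']}: {m['total']} lines, {m['comment']} comment, {m['blank']} blank")
    for m in flag_outliers(results):
        print(f"Re-examine {m['file']}: its size is unusually far from the average")

Even a script this small provides the kind of "window" described above; a real measurement tool would also count classes, routines, and data declarations.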

Don't start by collecting data on all possible measurements—you'll bury yourself in data so complex that you won't be able to figure out what any of it means. Start with a simple set of measurements, such as the number of defects, the number of work-months, the total dollars, and the total lines of code. Standardize the measurements across your projects, and then refine them and add to them as your understanding of what you want to measure improves (Pietrasanta 1990).
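As one way to make "standardize the measurements across your projects" concrete, the sketch below defines a per-project record limited to the starter set just named (defects, work-months, dollars, and total lines of code) plus one derived figure. The field names, the defects-per-KLOC calculation, and the sample values are illustrative assumptions, not conventions prescribed here.

# Sketch of a standardized per-project measurement record, limited to the
# simple starter set suggested above. Field names and values are illustrative.
from dataclasses import dataclass

@dataclass
class ProjectMeasurements:
    name: str
    defects: int
    work_months: float
    dollars: float
    lines_of_code: int

    def defects_per_kloc(self) -> float:
        """Derived quality figure: defects per thousand lines of code."""
        return self.defects / (self.lines_of_code / 1000.0)

# Recording the same fields for every project is what makes comparison and
# later refinement possible.
projects = [
    ProjectMeasurements("billing-system", defects=120, work_months=14.0,
                        dollars=210000.0, lines_of_code=48000),
    ProjectMeasurements("report-engine", defects=35, work_months=6.0,
                        dollars=95000.0, lines_of_code=17500),
]
for p in projects:
    print(f"{p.name}: {p.defects_per_kloc():.1f} defects per KLOC")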

Make sure you're collecting data for a reason. Set goals, determine the questions you need to ask to meet the goals, and then measure to answer the questions (Basili and Weiss 1984). Be sure that you ask for only as much information as is feasible to obtain, and keep in mind that data collection will always take a back seat to deadlines (Basili et al. 2002).
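The goal-question-metric idea cited above (Basili and Weiss 1984) can be captured in a small data structure. The sketch below is purely illustrative: the goal, the questions, and the chosen metrics are made-up examples that happen to reuse entries from Table 28-2, not a plan taken from the paper.

# Sketch of a goal-question-metric plan. Entries are illustrative; the point
# is that every measurement traces back to a question and a goal.
measurement_plan = {
    "goal": "Reduce the cost of correcting defects after release",
    "questions": {
        "Where do defects originate?": [
            "origin of each defect (requirements, design, construction, test)",
        ],
        "How expensive are corrections?": [
            "work-hours spent correcting each defect",
            "number of new errors resulting from defect correction",
        ],
    },
}

# Anything not listed under a question is data you are asking people to
# collect for no stated reason.
for question, metrics in measurement_plan["questions"].items():
    print(question)
    for metric in metrics:
        print("   measure:", metric)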

Additional Resources on Software Measurement

cc2e.com/2878

Here are additional resources:

Oman, Paul and Shari Lawrence Pfleeger, eds. Applying Software Metrics. Los Alamitos, CA: IEEE Computer Society Press, 1996. This volume collects more than 25 key papers on software measurement under one cover.

Jones, Capers. Applied Software Measurement: Assuring Productivity and Quality, 2d ed. New York, NY: McGraw-Hill, 1997. Jones is a leader in software measurement, and his book is an accumulation of knowledge in this area. It provides the definitive theory and practice of current measurement techniques and describes problems with traditional measurements. It lays out a full program for collecting "function-point metrics." Jones has collected and analyzed a huge amount of quality and productivity data, and this book distills the results in one place—including a fascinating chapter on averages for U.S. software development.

Grady, Robert B. Practical Software Metrics for Project Management and Process Improvement. Englewood Cliffs, NJ: Prentice Hall PTR, 1992. Grady describes lessons learned from establishing a software-measurement program at Hewlett-Packard and tells you how to establish a software-measurement program in your organization.

Conte, S. D., H. E. Dunsmore, and V. Y. Shen. Software Engineering Metrics and Models. Menlo Park, CA: Benjamin/Cummings, 1986. This book catalogs current knowledge of software measurement circa 1986, including commonly used measurements, experimental techniques, and criteria for evaluating experimental results.

Basili, Victor R., et al. "Lessons learned from 25 years of process improvement: The Rise and Fall of the NASA Software Engineering Laboratory," Proceedings of the 24th International Conference on Software Engineering. Orlando, FL, 2002. This paper catalogs lessons learned by one of the world's most sophisticated software-development organizations. The lessons focus on measurement topics.

cc2e.com/2892

NASA Software Engineering Laboratory. Software Measurement Guidebook, June 1995, NASA-GB-001-94. This guidebook of about 100 pages is probably the best source of practical information on how to set up and run a measurement program. It can be downloaded from NASA's website.

cc2e.com/2899

Gilb, Tom. Competitive Engineering. Boston, MA: Addison-Wesley, 2004. This book presents a measurement-focused approach to defining requirements, evaluating designs, measuring quality, and, in general, managing projects. It can be downloaded from Gilb's website.
