Software requirements are broken into small features called user stories. The story point scale is used to judge the size of user stories. User stories are analogous to the classical use cases. Story points are analogous to use case points.
Technical Debt
Number of defects discovered per iteration.
Tests
The number of tests that have been developed, executed, and passed to validate a story.
Level of Automation
The percentage of tests automated.
Earned Business Value (EBV)
Business value attached to stories delivered. According to Dave Nicolette, "EBV may be measured in terms of hard financial value based on the anticipated return on investment (ROI) prorated to each feature or user story."
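As a minimal sketch of this proration idea, the anticipated ROI of a release can be split across stories by their relative business weight, and EBV is earned as stories are delivered. The figures and story names below are hypothetical, not from the text:

    # EBV sketch: prorate an anticipated ROI to stories by weight,
    # then sum the value of the stories actually delivered.
    anticipated_roi = 100_000          # hypothetical ROI for the release
    stories = {                        # story -> (business weight, delivered?)
        "login": (5, True),
        "search": (8, True),
        "reports": (13, False),
    }

    total_weight = sum(w for w, _ in stories.values())
    ebv = sum(anticipated_roi * w / total_weight
              for w, delivered in stories.values() if delivered)
    print(f"Earned business value so far: {ebv:.0f}")  # 50000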
Burn-Down Chart
Burn down represents the remaining work of the project plotted against the time remaining. This information can be presented every week until a complete release. The burn-down chart is a famous agile visual used to track progress. Figure 9.1 shows an example of a burn-down chart.
A common practice is to plot a burn-down chart for every team and for each iteration. This provides the necessary biofeedback to teams to control the backlog and to attain iteration goals on time.
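A burn-down chart takes only a few lines to draw. The sketch below uses hypothetical backlog figures for a 10-day iteration: the ideal line runs from the full backlog at the start to zero at the end, and the actual line tracks the points still open each day.

    # Burn-down sketch with hypothetical data.
    import matplotlib.pyplot as plt

    days = range(11)                                  # a 10-day iteration
    backlog = 50                                      # story points at the start
    ideal = [backlog - backlog * d / 10 for d in days]
    actual = [50, 48, 47, 44, 40, 38, 33, 27, 20, 12, 5]  # points still open

    plt.plot(days, ideal, "--", label="Ideal")
    plt.plot(days, actual, label="Actual")
    plt.xlabel("Day of iteration")
    plt.ylabel("Backlog (story points)")
    plt.title("Burn-down chart")
    plt.legend()
    plt.show()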
Burn-Up Chart
Burn up represents work finished. Figure 9.2 illustrates a burn-up chart (BUC). The y-axis is a cumulative plot of stories developed iteration by iteration. The ideal performance line is plotted alongside the actual performance line to provide guidance and to measure performance gaps. The BUC can be used when several stories are developed concurrently. This is a chart of stories built and work done; it records achievement and not merely activities.
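The data behind a BUC is simply the cumulative sum of stories completed per iteration. A minimal sketch follows, with hypothetical counts and an assumed plan of 64 stories over 16 iterations:

    # Burn-up data sketch: cumulative stories done versus an ideal rate.
    from itertools import accumulate

    planned_total = 64
    completed_per_iteration = [3, 4, 4, 5, 3, 4, 5, 4]  # stories done each sprint

    actual = list(accumulate(completed_per_iteration))
    n = len(completed_per_iteration)
    ideal = [planned_total * (i + 1) / 16 for i in range(n)]  # 16 planned sprints

    for i, (a, e) in enumerate(zip(actual, ideal), start=1):
        print(f"Iteration {i:2d}: actual {a:2d}, ideal {e:4.1f}, gap {a - e:+.1f}")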
Agile Metrics 137
Burn Up with Scope Line
A BUC with the scope line marked above it, as shown in Figure 9.3, has an advantage: changing scope can be portrayed in this form, which is not easy in the other two charts.
Figure 9.1 Burn-down chart (backlog plotted from the start to the end of an iteration, with ideal and actual lines).
Figure 9.2 Burn-up chart (cumulative stories plotted across Iterations 1 to 16, with ideal and actual burn-up lines).
Box 9.3 A Rebirth
Many classic metrics are reborn into the agile world with new names. Productivity used to be measured as KLOC/man month. This metric is now called velocity and is measured as story points delivered per iteration. The basic concept remains the same, but the metrics used in the calculation are now less precise but more convenient. The expectations have shifted. The velocity metric is not used as a target, and that seems to have made all the difference; the metric has received social acceptance.
When a measure becomes a target, it ceases to be a good
measure.
Goodhart's Law
Good old defects are now called technical debt. Programmers have long known that there should be no stigma attached to software defects. The new definition upholds dignity and the human spirit.
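Velocity itself is trivial to compute; a sketch with hypothetical sprint data follows. The running average supports planning, and, in the spirit of Goodhart's law, it should inform rather than serve as a target:

    # Velocity sketch: story points delivered per iteration.
    delivered = [21, 18, 24, 22, 20]   # story points completed per iteration

    velocity = sum(delivered) / len(delivered)
    print(f"Average velocity: {velocity:.1f} story points per iteration")  # 21.0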
Figure 9.3 Burn-up chart with scope line (cumulative stories across Iterations 1 to 16, with the scope line marked above).
Adding More Agile Metrics
The present metric system is a victim of poor implementation. It is overdesigned but underutilized. Agile metrics are simple and easy to implement; they have to be simple to honor the very spirit of agile methodology.
Hartmann and Dymond [1] list 10 attributes of a typical agile metric system
as follows:
1. It affirms and reinforces lean and agile principles.
2. It follows trends, not numbers. Measure “one level up” to ensure you measure
aggregated information, not suboptimized parts of a whole.
3. It belongs to a small set of metrics and diagnostics. A "just enough" metrics approach is recommended: too much information can obscure important trends.
4. It measures outcome, not output.
5. It is easy to collect.
6. It reveals rather than conceals.
7. It provides fuel for meaningful conversation.
8. It provides feedback on a frequent and regular basis.
9. It measures value.
10. It encourages “good enough” quality.
It may be noted that the above attributes can be applied to metrics in conventional life cycles too.
Hartmann and Dymond conclude that the key agile metric should be business
value, and they note,
Agile methods encourage businesses to be accountable for
the investment in software development efforts. In the same
spirit, the key metrics we use should allow us to measure this
accountability. Metrics should help validate businesses that
make smart software investments and teams that deliver business value quickly.
ROI begins with the first release of features. Value must be measured. Delivering value early is the hallmark of agile projects.
However, in extreme programming, project teams have added metrics that are not so simple. Where source code is refactored at the end of each iteration, teams use the following metrics:
1. Coupling
2. Cyclomatic complexity
An example of value generated by these metrics is available in a case study by Martin Iliev [2]. Coupling metrics lead to "good encapsulation, high level of abstraction, good opportunity for reuse, easy extensibility, low development costs and low maintenance costs." Further, cyclomatic complexity metrics lead to "low maintenance costs, collective code ownership, easy to test and produce good code coverage results." Martin Iliev has established a firm business case for these metrics.
From the above example, it may be seen that metrics are agile because of
the way they are used and the value they create and not because of their
internal characteristics.
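Cyclomatic complexity itself is easy to approximate: one plus the number of branch points in a function. The sketch below counts branch nodes in Python source with the standard ast module; a real tool such as radon does far more, and this only illustrates the decision-points-plus-one idea:

    # Minimal cyclomatic complexity sketch using the ast module.
    import ast

    def cyclomatic_complexity(source: str) -> int:
        branch_nodes = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                        ast.ExceptHandler, ast.IfExp)
        tree = ast.parse(source)
        return 1 + sum(isinstance(node, branch_nodes)
                       for node in ast.walk(tree))

    code = """
    def grade(score):
        if score >= 90:
            return "A"
        elif score >= 75:
            return "B"
        return "C"
    """
    print(cyclomatic_complexity(code))  # 3: two branches plus one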
In yet another case, Frank Maurer and Sebastien Martel [3] study productivity
in extreme programming in OO projects using the following four metrics:
1. LOC/effort
2. Methods/effort
3. Classes/effort
4. (Bugs + features)/effort
They present evidence for improvement in productivity after introducing XP using the four metric data, a fairly obvious use of agile metrics to find the ROI of process improvement. It may be noted that they have considered the metric productivity instead of velocity in this case study.
Case Study: Earned Value Management
in the Agile World
BUCs in agile projects remind us of the earned value graph (EVG) in conventional projects. BUC and EVG look alike, and the similarity runs deeper. Earned value management is widely accepted as a best practice in project management and is covered well in the Project Management Body of Knowledge. Managing milestones makes a manager agile, in sharp contrast with one who chooses to manage at the task level. There are typically about eight milestones in a project, and all the project manager has to do is monitor earned value, planned value, and cost at every milestone, connect the dots, and plot the EVG. As milestones pass by, the project manager is able to predict future performance by seeing trends. A BUC does exactly that. We use sprints instead of milestones. Value is measured in terms of finished and tested stories.
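As a sketch of this tracking with sprints as milestones (the figures are hypothetical): planned value (PV) is the budgeted value of work scheduled so far, earned value (EV) is the value of stories actually finished and tested, and actual cost (AC) is what was spent. The standard EVM ratios then summarize schedule and cost health:

    # Earned value sketch at the end of a sprint (hypothetical figures).
    pv, ev, ac = 80_000, 72_000, 75_000   # at the end of sprint 4

    spi = ev / pv   # schedule performance index: < 1 means behind schedule
    cpi = ev / ac   # cost performance index: < 1 means over budget
    print(f"SPI = {spi:.2f}, CPI = {cpi:.2f}")  # SPI = 0.90, CPI = 0.96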
The implementation of EVM in agile projects is explained by John Rusk [4], who observes, "Agile and EVM are a natural fit for each other."
Anthony Cabri and Mike Griffith [5] explore EVM usage in agile projects, create examples of BUCs, and tackle the issue of changing scope with EVM.