Chapter 21. Visible Graphs

  • Anyone should be able to sense the state of the project by looking at a handful of graphs in the team's working area.

We are big fans of the scientific method. Emotional outbursts like, "I think you are going too slow," or, "This software is riddled with bugs," don't help when you're trying to steer a software project. How fast are we going? What variety of Swiss cheese does the software most closely resemble?

Deming says, "You can't manage what you can't measure." We're sure this isn't true, because lots of software gets shipped without any metrics being gathered. However, if you choose to manage without metrics, you are committing yourself to a degree of emotional detachment that is difficult to maintain, especially under pressure. The numbers help you look your fears in the face. Once you've acknowledged your fear you can use informed intuition to make your decision.

The danger with a "scientific" approach to planning is that the measurements can become the end instead of the means. The overhead of data collection can swamp real work. The process can become inhumane, with people—messy, smelly, distractable, inspired, unpredictable people—conveniently abstracted away to a set of numbers.

That's not what we are talking about. Don't do that.

Instead, here is a process that combines intuition and measurement:

  1. Smell a problem.

  2. Devise a measurement.

  3. Display the measurement.

  4. If the problem doesn't go away, return to 2.

Examples

Here are some examples from a real XP project that has been operating for ten iterations. The team devised an ingenious low-cost data-gathering technique. First, the team is small enough (five programmers) that team members don't need to distinguish between stories and tasks. Their stories are small, between four and twenty hours. On the back of each story card is a little table:

Pair      Date          Hours
____      __________    _____
____      __________    _____
____      __________    _____
____      __________    _____
____      __________    _____

At the end of every iteration, someone types all the values from the backs of the cards into a spreadsheet. The raw data can then be presented in many different forms.
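Typing the card backs into a spreadsheet is simple enough that a short script can stand in for it. Here is a minimal Python sketch, assuming the values have been entered into a CSV file called cards.csv with story, pair, date, and hours columns (the file name and column names are our invention, not the team's actual layout); it totals the recorded programming hours per calendar week:

import csv
from collections import defaultdict
from datetime import date

# Minimal sketch: total the programming hours recorded on the card backs.
# Assumes the cards were typed into "cards.csv" with columns
# story, pair, date (YYYY-MM-DD), and hours -- our guesses at a layout,
# not the team's actual spreadsheet.

def hours_by_week(path="cards.csv"):
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            year, week, _ = date.fromisoformat(row["date"]).isocalendar()
            totals[(year, week)] += float(row["hours"])
    return dict(sorted(totals.items()))

if __name__ == "__main__":
    for (year, week), hours in hours_by_week().items():
        print(f"{year}-W{week:02d}: {hours:5.1f} programming hours")

Once the raw hours are in one place like this, any of the graphs below is a few lines of summarizing away.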

Productivity

Team members began to get the feeling that they were going slower than they should. Rather than do something obviously ineffective like work longer hours, they decided to measure. The measure they chose was the percentage of office hours spent on programming.

It's obvious from Figure 21.1 that the team just wasn't spending many hours programming. No wonder it was going slowly.

Figure 21.1. Measured Overhead

A little reflection showed that the hours dropped about the time the team

  • Split to start a second project, and

  • Acquired external customers

No wonder the team didn't have as much time for programming. After doing what they could to increase programming time, team members modified their release plan to reflect their new measured velocity.
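To make the arithmetic concrete, here is a small sketch of the calculation behind a graph like Figure 21.1. The programming-hour totals and the office-hours figure below are invented for illustration; substitute the per-iteration sums from your own card backs:

# Sketch of the "percentage of office hours spent programming" measure.
# All numbers are invented for illustration; plug in your own totals
# (e.g. the per-iteration sums from the card backs).

programming_hours = {1: 310.0, 2: 295.0, 3: 180.0, 4: 150.0}  # hours per iteration
office_hours_per_iteration = 5 * 2 * 40.0  # five programmers, two 40-hour weeks (assumed)

for iteration in sorted(programming_hours):
    pct = 100.0 * programming_hours[iteration] / office_hours_per_iteration
    print(f"Iteration {iteration}: {pct:4.1f}% of office hours spent programming")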

Integration Hell (Well, "Heck" Anyway)

Another problem that began to smell was that integrations were taking too long and were the source of too many errors. The team started tracking how long they were programming together before integrating (see Figure 21.2).

Figure 21.2. Longer Sessions Slow Integration

The trend toward longer and longer programming sessions was obvious. The consequence was that people were exhausted when they finally integrated, increasing the probability of errors. Also, the delay before integrating increased the probability of conflicts with the changes from other pairs. Once the measurement was in place (on June 14), the average duration of a pairing session dropped to two hours and integration got easier.

This example illustrates an important principle of management by measurement: indirection. The team could instead have started tracking how long each integration took and tried to optimize integration directly. By finding the root cause of difficult integrations, team members were able to treat the cause, not just the symptoms.
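For the curious, here is a minimal sketch of the bookkeeping behind Figure 21.2, assuming each pairing session is recorded with its start time and the time of the integration that ended it (the record format and the sample times are our assumptions, not the team's data):

from collections import defaultdict
from datetime import datetime

# Sketch: how long pairs programmed before integrating, averaged per day.
# Each record is (session start, integration time); the format and the
# sample times are invented for illustration.

sessions = [
    ("2000-06-12 09:00", "2000-06-12 16:30"),
    ("2000-06-13 09:30", "2000-06-13 17:00"),
    ("2000-06-15 09:00", "2000-06-15 11:00"),
    ("2000-06-15 13:00", "2000-06-15 15:00"),
]

durations = defaultdict(list)
for start, integrated in sessions:
    begin = datetime.fromisoformat(start)
    end = datetime.fromisoformat(integrated)
    durations[begin.date()].append((end - begin).total_seconds() / 3600.0)

for day in sorted(durations):
    average = sum(durations[day]) / len(durations[day])
    print(f"{day}: {average:.1f} hours of pairing before integration")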

Choosing Which Graphs to Show

Here are some graphs you may want to use. Choose your graphs carefully. Consider what things you, your management, your programmers, and the customers are concerned about. For each worry, try to think of a simple graph that will show everyone present what's happening.

When a graph has done its job, drop it. Graphs are a powerful tool, but too many of them blunt their purpose. Everyone should know that the graphs count, and the chore of plotting them should be compensated for by the warm feeling of gaining useful information.

Many people suggest putting these graphs on a Web site. This is good if people at remote sites need to see what's happening. But don't let that be a substitute for putting the graphs on the wall in the developers' area. Web sites don't get looked at unless they're clicked on. You can't avoid what's on the bathroom wall. Many an insight comes from idly staring at a graph while you're half doing something else.

Here are a handful of graphs we've been glad we used:

  • Acceptance Tests Defined and Passing

  • Production Code Bulk versus Test Code Bulk

  • Integrations

  • Bug Density

  • Story Progress

  • System Performance

The most important thing to remember is to select the graphs you need and stop producing graphs you don't. Although we'll be flattered if you pick the graphs we suggest, it's far more important that you think about your worries and choose graphs that illustrate your worries. Just trying to figure out what the graphs should be will probably do a lot to help you think through your issues. After all, we already get enough flattery.
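If one of your worries is acceptance-test progress, the plot itself needs nothing fancy. Here is a minimal sketch using matplotlib (our choice of plotting library, not the team's), with invented counts standing in for real iteration data:

import matplotlib.pyplot as plt

# Sketch: "Acceptance Tests Defined and Passing" by iteration.
# The counts are invented for illustration.

iterations = [1, 2, 3, 4, 5, 6]
defined = [10, 25, 40, 55, 70, 80]
passing = [4, 18, 30, 38, 61, 76]

plt.plot(iterations, defined, marker="o", label="Defined")
plt.plot(iterations, passing, marker="s", label="Passing")
plt.xlabel("Iteration")
plt.ylabel("Acceptance tests")
plt.title("Acceptance Tests Defined and Passing")
plt.legend()
plt.savefig("acceptance_tests.png")

Print it out and tape it to the wall; the spreadsheet and the printer are all the tooling this practice needs.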
