134 ◾ Simple Statistical Methods for Software Engineering
worked on the one side; organizations showed a Return on Investment (ROI). On
the other side, PSP metrics were not popular and were resisted by teams. There
were two sides to PSP metrics, the good and the bad. People wanted simpler
solutions and pushed PSP metrics aside.
Automated metrics, which depend on tools, survived; people were asked just
to use data, not to collect data. That by itself made a huge difference. The drawback
is that nobody is motivated to use data collected by tools. A seeming gain became a
loss. Such data did not connect well with people except in the form of reports and
business intelligence; the intended recipients were senior managers and customers.
Designers and testers did not care.
Simple core metrics were good and they worked. Fancy special metrics did
not. Metrics planned and metrics used were worlds apart. At least metrics helped
in goal setting. Process control loops using metrics were too circuitous and oper-
ated with such time lags that, by the time reports came, projects were finished. Even
the core metrics did not reach teams in time. The feedback loop disappeared on
the way. Care was taken to install a metric, but no one thought about human
communication.
Without communication links with people, metrics fail.
Metrics for Agile: Humanization
Agile projects have put people before processes and humanized metrics. Agile metrics
instantly reach people through displayed charts and daily meetings. The metrics them-
selves are kept light, user friendly, and direct. Code complexity is judged as high,
medium, or low instead of being measured with the sophisticated McCabe complexity
number. Software sizes come in story points; one does not have to count function
points using the time-consuming rules of the International Function Point Users
Group. More than that, the metrics are readily used. Where possible, graphical
elements have become signifiers instead of numerical indicators. The burn-down
chart has replaced several metrics at once. Agile metrics influence day-to-day
decision making.
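As an illustration of how little computation a burn-down chart needs, the sketch below compares actual remaining story points against an ideal linear burn. The sprint length and the remaining-work figures are hypothetical, chosen only to show the idea.

```python
# Minimal burn-down sketch (hypothetical data): remaining story points
# recorded at the start of a 5-day sprint and at the end of each day.
total_points = 40
remaining = [40, 34, 30, 21, 12, 0]  # day 0 (start) through day 5

# Ideal line: linear burn from the total down to zero over the sprint.
days = len(remaining) - 1
ideal = [total_points - total_points * d / days for d in range(days + 1)]

for day, (actual, plan) in enumerate(zip(remaining, ideal)):
    status = "ahead" if actual < plan else "behind" if actual > plan else "on track"
    print(f"day {day}: actual={actual:>5.1f}  ideal={plan:>5.1f}  ({status})")
```

A team reads the chart, not the numbers: whenever the actual line sits above the ideal line, the sprint is behind, and the gap is visible at a glance in the daily meeting.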
The Price of Humanization
Agile metric data are mostly on the ordinal scale (as in "high–medium–low" judg-
ments). Ordinal data can be collected quickly without hassle. Ordinal data also carry
a calculated degree of approximation. The contrast is between precise but little-used
classic metric data and approximate but humanized agile data. Working with a
lower measurement scale incurs information loss, a price we pay for humanization.
However, that loss does not make agile metrics less scientific than classic metrics,
as long as users are aware of the degree of approximation.
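The information loss can be made concrete with a small sketch: collapsing exact McCabe complexity numbers into high/medium/low bands. The cutoffs (10 and 20) and the module names below are illustrative assumptions, not values prescribed by the text.

```python
# Sketch of ordinal binning (cutoffs of 10 and 20 are assumed, not standard):
# map an exact McCabe cyclomatic complexity number to an ordinal band.
def complexity_band(mccabe: int) -> str:
    """Collapse a ratio-scale complexity number to an ordinal judgment."""
    if mccabe <= 10:   # assumed cutoff for "low"
        return "low"
    if mccabe <= 20:   # assumed cutoff for "medium"
        return "medium"
    return "high"

# Hypothetical modules with exact complexity numbers.
modules = {"parser": 7, "scheduler": 14, "optimizer": 19, "codegen": 35}
bands = {name: complexity_band(m) for name, m in modules.items()}
print(bands)
# "scheduler" (14) and "optimizer" (19) fall into the same band:
# the distinction between them is lost -- the price of humanization.
```

The loss is harmless as long as the team remembers that "medium" spans a range of exact values; it becomes a problem only if the bands are later treated as precise measurements.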