3

VALUE

The mind is the laboratory where products, both fake and genuine, are manufactured. People grow wild weeds, others grow flourishing flowers!

—Israelmore Ayivor

QUIZ

To set the stage for this chapter, try answering each of the following statements with Agree or Disagree. Answers appear at the end of the chapter.

Statement (Agree or Disagree?)

There is no value until a product is released.

For for-profit organizations, value is ultimately represented in terms of money (revenue and cost).

Attaching incentives to value metrics improves performance and morale.

Metrics can help validate business hypotheses and the impact of releases.

Velocity is a good measure of value delivered.

A release can produce negative value.

VALUE DEFINED

When was the last time a new product or service brought a smile to your face? Would you consider that product or service to be valuable? Absolutely. But what exactly is value?

It depends on the context.

As people, anything we consider to be of value ultimately comes down to one elusive element: happiness. Is money valuable? Only if it makes you happy. More time with your family? Only if that makes you happy (not all families are worth being around). Time, money, a good job: these are only circumstantial evidence of value. We assume having more of them will make us happy.

Value is not just about people. Organizations strive for value too.

As companies, anything they consider to be of value ultimately comes down to one elusive element: money. Are happy customers valuable? Only if they pay for your product. Is a good culture valuable? Only if it reduces the cost of attracting and retaining employees. Happy customers, good culture, streamlined processes; these are only circumstantial evidence of value. We assume having more of them will make (and save) us more money.

An obvious exception is nonprofit organizations such as charities and government agencies, where value is about improving society. For nonprofits, money is circumstantial. In fact, increasing revenue while not positively impacting society would be considered negative value.

Why are these definitions of value important for a Product Owner? Generating money (or improving society) is the reason you are in business and it maps to what this book refers to as the producer benefit. On the other hand, happiness is what your customers care about and is mapped to your value propositions.

Both are relevant and connected. You cannot make money without having a customer who appreciates your product or service.

As a Product Owner, you need to understand your business and your customers.

Stephen Denning describes this well with his “delighting clients” thinking.1 The customer always comes first; money follows. In the end, it is about money, but do not let the money blind your actions. Do not let the money betray your customer loyalty or your desire to make a customer smile. In the 21st century, the customer is king.

DELIVERING VALUE

When can you actually deliver value?

Well, there is only one possible answer to that question. Release!

Each release is an opportunity to create value. Everything leading up to a release is an investment: inventory, bound capital, money you cannot use for other important initiatives.

How often does your company release? Many companies release about twice a year. That schedule represents a lot of investment before seeing a penny in return.

All of the product management activities mentioned earlier lead to value only through a release, when you put your product into the hands of customers.

Figure 3-1 Release is the funnel toward value

Think about a release as a funnel through which you squeeze all your product development activities (see Figure 3-1). The time this release-funnel consumes is crucial: it’s your lead time.

If you ever find yourself in a debate between waterfall and agile, there is only one point you need to make to upper management: When do you get value to your customer?

Figure 3-2 Traditional waterfall approach delays value.

Release equates to value. With the waterfall approach, when do you release? By definition, it is the last step (see Figure 3-2). Therefore, by design, waterfall delays value.

By contrast, a Scrum Team produces potentially releasable increments of the most valuable items every 30 days or less (see Figure 3-3).

Admittedly, releasing is easier said than done. Because of this, many organizations delay all the activities needed to release on a consistent basis. You should recognize that this investment in getting value out to your customer sooner is worthwhile. Using Scrum makes this more evident.

Figure 3-3 Delivering value: Scrum versus waterfall

Figure 3-4 shows the potential value accumulation in Scrum versus the waterfall approach. Remember, however, that until you release, you are adding no value, only costs. This is closely related to the financial concept of Cost of Delay, which quantifies how value leaks away over time.

Figure 3-4 Accumulating value: Scrum versus waterfall

Each Sprint, you are spending tens of thousands of dollars. If you wait six months before releasing, you could be looking at hundreds of thousands (or even millions) without a single penny of return on that investment. Each release should be looked at as a return on investment. If that return is not what was expected, or if better investment opportunities show up, pivot (see Figure 3-5).

Figure 3-5 Learning from releases and pivoting

This is the true benefit of using a framework such as Scrum. It is not a project management tool. Rather, it creates a competitive advantage for organizations by providing them the ability to test out their hypotheses more often. You will explore this concept further in the next chapter.

VALUE METRICS

How do you know you are creating value? For a Product Owner to properly adapt a product, she needs some empirical evidence, something to inspect. That is where the right metrics come in.

Let’s talk pizza.

Imagine you work at a growing pizza chain. You are part of their delivery organization. You and your colleagues are responsible for getting pizzas out the door to hungry customers. You have managers, drivers, phone operators, vendors, and other necessary personnel.

How do you know whether you are being successful? What metrics will you measure your progress against?

Do they look something like these?

Pizzas delivered per trip

Time to take an order

Time for delivery

Distance per delivery

Fuel costs

Order accuracy

Customer complaints

Orders per driver

Incidents (accidents, tickets)

Route efficiency

A perfectly reasonable set of metrics, right? If you care about improving your department, you should definitely track them.

Now, switch roles. You are owner/partner of the pizza chain. Come up with another list of metrics that matter to you and your partners.

Do they look like these?

Revenue

Investment

Operating costs

Profit

Customer satisfaction

Employee satisfaction

Repeat customers

Time to market

Growth

Market share

Market drivers (trends, ingredients, events)

Are your two lists very different? Yes, mostly they are, with a few exceptions (e.g., customer satisfaction, costs). Why is that?

Consider three things:

1. Efficiency

The delivery organization now has a set of shiny metrics it can work toward. The practices and processes put in place will focus on moving these numbers. The assumption is that by improving these intermediate metrics, the business will benefit:

“Hey look, we spent thousands implementing these new route efficiency algorithms so that we can save 60 seconds on each delivery.” You may reasonably assume that this will be good for the business, but it’s no guarantee and may even become more of a distraction than a benefit. Do you even know that customers want quicker pizza delivery? Is the ROI there? Popular practices are great, but they are not the end goal, and you cannot lose sight of the business’s true needs. Otherwise you end up with a Cargo Cult mentality, which “slavishly imitates the working methods of more successful development organizations.”2

2. Vision

The more your people know and understand the true vision and goals of the organization and the product, the better decisions they will make. Like it or not, assumptions and decisions are often made independently of one another. Wrong assumptions lead to bad decisions. You can minimize the negative impact of these decisions by educating your people on the true organizational drivers.

In 1993, Domino's Pizza replaced its 30-minute-or-free guarantee with a promise that customers would be satisfied with their product, making it a quality guarantee rather than a time-dependent one. "If for any reason you are dissatisfied with your Domino's Pizza dining experience, we will remake your pizza or refund your money" is now the company's Satisfaction Guarantee.3 This is an example of focusing on truly valuable metrics (customer satisfaction) rather than delivery metrics (time for delivery), which empowers employees to make decisions based on the company vision.

3. Incentive

Your pizza shop has quarterly bonuses to hand out. What would you base them on? The delivery metrics or the owner metrics? Which ones have a greater potential for corruption? Any metric can be gamed, but the more intermediate, circumstantial metrics in the delivery department offer a bigger opportunity for abuse. This doesn't just create unintended behaviors; it also reduces the transparency and therefore the usefulness of these metrics.

A few years ago, my wife and I went to dinner at a chain restaurant. Before finishing our mediocre meal, we were presented with a customer satisfaction survey and told that if we gave all “excellent” scores, we would receive a free appetizer for our next visit. This was obviously not the way the business intended to use these surveys. (Of course, we gave them all “excellents.”)

So, what does this all mean?

The delivery metrics aren’t without worth. They are very helpful, even essential, for guiding the more operational practices. But you run into problems when they are used as false representations of value and set as achievement goals. If the business doesn’t establish the goals along with more direct value metrics, then the delivery organization will have no choice but to offer up its own.

Let’s get back to software.

Can you now correlate these metrics with ones used when developing software products? Which metrics are used by software delivery? Which ones by the business? Table 3-1 provides some examples.

Table 3-1 Comparison of Delivery and Owner Metrics

Delivery Metrics: Velocity; Number of Tests; Code Coverage; Defects; Coupling and Cohesion; Code Complexity; Build Failures; Process Adherence; Lines of Code

Owner Metrics: Revenue; Costs; Customer Satisfaction; Employee Satisfaction; Lead & Cycle Time*; Innovation Rate* (percentage of new vs. maintenance work); Usage*

*This metric is defined later in the chapter.

So how does your organization measure value? Does it spend too much time on metrics in Table 3-1’s left column? Is it missing opportunities to instead apply metrics that reflect true business outcomes?

Delivery metrics are still important, but they must be considered value-neutral. Otherwise they lose their importance as feedback mechanisms for the delivery organization. Compare them to an airplane cockpit, filled with dials and information radiators about altitude, engine temperature, oil levels, outside temperature, and so on. All that important information is for the pilots only so they can successfully complete their mission. This information, however, does not reflect the value delivered. Customers and the airline organization measure value by being on time, safety, and fuel costs. Pilots should be held accountable for those final outcome metrics, not the cockpit metrics. In the same way, software teams should be held accountable for business metrics, not circumstantial metrics like velocity, test coverage, and process adherence.

This disconnect between the delivery organization and the business (the Product Management Vacuum, discussed in Chapter 1) prevails in the software industry. Somewhere along the line, the true vision behind the products gets lost.

Challenging managers, Product Owners, and Development Teams to identify and promote metrics related to actual outcomes is key to bridging that gap. This can promote true agility beyond IT departments and give organizations a true competitive advantage.

This concept of using direct rather than circumstantial evidence is known as Evidence-Based Management (EBMgt). It is gaining traction in the technology field and has industry leaders like Ken Schwaber advocating for it. And it is the topic of our next section.

EVIDENCE-BASED MANAGEMENT

Humans have practiced medicine for about 2,900 years. For 2,800 of those years, snake oil and other nonsensical treatments predominated. People drank mercury because they thought it must have magical properties.

About a hundred years ago, the medical field started to require evidence that a medication or a procedure improves the outcome. Evidence-based medicine was born.

I once went to a dermatologist for a red mark on my cheek. After examining it, the doctor said she needed to order a biopsy. When I asked if she knew what the mark was, she replied with confidence, “Oh that’s basal cell carcinoma. I’ve seen it hundreds of times.” Confused, I asked why we couldn’t just take it out. Why do a biopsy that would add recovery time and cost? She responded, “That’s just not how it’s done. We need the evidence first.”

Since these principles have helped prove assumptions and therefore improved overall health, what if the same principles were applied to the domain of management?

What is Evidence-Based Management (EBMgt)?

Over the past two decades, most organizations have significantly increased the value gained from software by adopting the Scrum framework and agile principles. Evidence-Based Management practices promise to deliver even further gain.

But how can you, as an IT leader, make the biggest impact on your organization? You manage investments based on ROI and value. You know that frequent inspection of results will limit the risk of disruption. You influence the organization to create a culture that allows it to take advantage of opportunities before your competitors do. By following EBMgt practices, you can put the right measures in place to invest in the right places, make smarter decisions, and reduce risk.

Figure 3-6 shows a series of EBMgt metrics that act as evidence for the value produced from products. These value metrics are organized into three Key Value Areas (KVA): Current Value, Time to Market, and Ability to Innovate. Within each KVA are Key Value Measures (KVMs), also shown in Figure 3-6.

Figure 3-6 Value metrics based on EBMgt (Figure adapted from Evidence-Based Management for Software Organizations [EBMgt], http://www.ebmgt.org/)

Before getting into the descriptions of each metric, there is an important distinction to make between two types of metrics: leading and lagging indicators.

Lagging indicators are typically "output" oriented: easy to measure but hard to improve or influence. They are much more meaningful to the business because they represent the reason for having a product. Leading indicators are typically "input" oriented; you have more influence over them because you understand the practices that drive them.

Another way to look at them is that leading indicators are pre-product and lagging indicators are post-product.

A common example used to illustrate this distinction is trying to lose weight. The ultimate goal is to lower the weight measurement. Weight is a lagging indicator, easy to measure—just step on a scale—but harder to directly influence (unless you cut off a limb). How much you eat and exercise are leading indicators. It certainly is not easy, but you have more influence over these leading indicators. The hope is that as you positively influence calories taken in and calories burned, eventually your lagging weight indicator is affected (hopefully in the right direction).

In product development, measuring both is just as important. As practitioners, you need to influence your current practices to eventually see changes in your business objectives.

CURRENT VALUE

From the EBMgt Guide

Current Value reveals the organization’s actual value in the marketplace. Current Value establishes the organization’s current context, but has no relevance on an organization’s ability to sustain value in the future.4

Notice the emphasis on the organization. While measuring value at the product level is important, understanding how it compares to the overall organization’s current value provides you with a more holistic view.

The following, mostly lagging, metrics represent the ultimate benefit for the producer of the product.

Revenue per Employee

Gross revenue divided by the number of employees

Revenue by itself is an important metric. It is the benchmark by which organizations are compared. Products are no different, and it should be measured unless your product vision is strictly about cost savings.

Measuring revenue per employee gives a little more context. This measure is interesting to observe when your product or organization is going through a growth phase. Growth and revenue per employee often do not increase linearly; once a company doubles in size, the revenue per employee usually drops. Normally this is attributed to more hierarchies and longer chains of command—the cost of scale.

Product Cost Ratio

All expenses in the organization that develop, sustain, provide services, market, sell, and administer the product or system.

There are two ways to look at cost:

The investment in product development—a leading metric that can be measured each Sprint. The biggest cost here is likely the salaries of the Development Team.

The better you understand this metric, the easier it is to measure your return on investment.

The cost of running the product in production—a lagging metric that can include everything from the cost of the production servers and training users to salaries of internal users and support staff.

The better you understand this metric, the easier it is to measure your total cost of ownership.

I was working with an organization in the Minneapolis area to gather metrics around cost. When I asked about Development Team costs, the stakeholders in the room responded that they had no access to salary information. Rather than remove or delay that metric, we decided instead to just search the web for average IT salaries in the greater Twin Cities area. We multiplied that number by the number of team members, which led to an estimated cost-per-Sprint metric. The important point here is that we want people thinking about costs, even if the number is not 100 percent accurate. We also want to just get started (often the hardest part) so that we have some sort of a baseline. If anyone who has the real information sees our metric and raises an issue, then they can provide us with the more accurate data. As with many solutions in the agile world, get started with what you have and refine as you go. Nothing will be perfect at the beginning.
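If you want to try a similar back-of-the-envelope estimate, here is a minimal sketch in Python; the salary, overhead multiplier, team size, and Sprint length are hypothetical placeholders, not figures from that organization.

```python
# Rough cost-per-Sprint estimate, as described in the story above.
# All numbers are hypothetical placeholders; replace them with your own.

AVERAGE_ANNUAL_SALARY = 95_000   # assumed average IT salary for the region
OVERHEAD_MULTIPLIER = 1.3        # assumed benefits and overhead on top of salary
TEAM_SIZE = 7                    # Development Team members
SPRINT_LENGTH_WEEKS = 2

weeks_per_year = 52
cost_per_person_per_week = AVERAGE_ANNUAL_SALARY * OVERHEAD_MULTIPLIER / weeks_per_year
cost_per_sprint = cost_per_person_per_week * SPRINT_LENGTH_WEEKS * TEAM_SIZE

print(f"Estimated cost per Sprint: ${cost_per_sprint:,.0f}")
```

Even a rough number like this gives stakeholders a baseline to react to, which is the whole point.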

Employee Satisfaction

Engaged employees that know how to maintain, sustain, and enhance the software systems and products are one of the most significant assets of an organization.

According to recent studies, more than 50 percent of the U.S. workforce are not engaged and close to 20 percent are actively disengaged.5 Often, the only reason they hang in is for health insurance.

Moving away from extrinsic rewards (carrots and sticks) to more intrinsic rewards is key. Dan Pink describes intrinsic motivation in detail in his book Drive: The Surprising Truth about What Motivates Us.6 He asserts that intrinsic motivation comes down to three elements:

Autonomy: the desire to be self-directed

Mastery: the itch to keep improving at something that’s important to us

Purpose: the sense that what we do produces something transcendent or serves something meaningful beyond ourselves

In Chapter 6, you will explore how Scrum injects all three of these motivating elements into its framework.

Jurgen Appelo also provides interesting thoughts and ideas about motivation with his Management 3.0 approach7—ideas and practices for fostering employee engagement. Going into the details here, however, is outside the scope of this book.

I find that the Sprint Retrospective is a great time to measure the employee satisfaction of the Development Team. A simple, consistent question to conclude the Retrospective creates a useful "happiness metric" that will allow you to spot trends over time. Consider asking something like "On a scale from '5 - I am super happy!' to '1 - I never want to experience that again,' how did you feel about that last Sprint?"

I like to hand out feedback cards (see Figure 3-7) to developers at the end of each Sprint. In this questionnaire I ask the developers on the Development Team about their happiness in various areas. The “Product Owner” and “Scrum Master” happiness questions are set, while the other questions depend on the context. This enables me to discover trends and take action early on.

Figure 3-7 Happiness index
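A minimal sketch of how such per-Sprint happiness scores could be aggregated into a trend; the Sprint names and scores below are made up for illustration.

```python
# Track a per-Sprint "happiness metric" from Retrospective feedback
# (1 = never want to experience that again, 5 = super happy).
# The data is invented purely to show the aggregation.

from statistics import mean

sprint_scores = {
    "Sprint 14": [4, 3, 5, 4, 4],
    "Sprint 15": [3, 3, 4, 4, 3],
    "Sprint 16": [2, 3, 3, 2, 3],   # a downward trend worth discussing
}

for sprint, scores in sprint_scores.items():
    print(f"{sprint}: average happiness {mean(scores):.1f}")
```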

Customer Satisfaction

Meeting or surpassing your customer’s expectation

The purpose of business is to create and keep a customer.

—Peter Drucker

This is a lagging metric for how satisfied your customers are with your product and the services that support it.

There are many ways to measure customer satisfaction, and you may even be able to leverage existing measurements from your organization.

A common industry recognized way of measuring customer satisfaction is the Net Promoter Score (NPS). NPS measures customer experience and predicts business growth. This proven metric transformed the business world and now provides the core measurement for customer experience management programs around the world.8

Calculate your NPS using the answer to a key question: “Using a 0–10 scale: How likely is it that you would recommend [product] to a friend or colleague?”

Respondents are grouped as follows:

Promoters (score 9–10) are loyal enthusiasts who will keep buying and refer others, fueling growth.

Passives (score 7–8) are satisfied but unenthusiastic customers who are vulnerable to competitive offerings.

Detractors (score 0–6) are unhappy customers who can damage your brand and impede growth through negative word of mouth.

Subtracting the percentage of Detractors from the percentage of Promoters yields the Net Promoter Score, which can range from a low of -100 (if every customer is a Detractor) to a high of 100 (if every customer is a Promoter).
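Here is a minimal sketch of that calculation; the sample responses are invented purely to illustrate the arithmetic.

```python
# Net Promoter Score from a list of 0-10 survey answers, as described above.

def net_promoter_score(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical sample: 50 promoters, 30 passives, 20 detractors.
sample = [10] * 50 + [7] * 30 + [3] * 20
print(net_promoter_score(sample))   # (50% - 20%) * 100 = 30
```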

Consider building a feedback mechanism for customer satisfaction right into your product. Microsoft Office made this a standard feature in its toolset in 2016 (see Figure 3-8).

Figure 3-8 Direct feedback from a Microsoft Office product

TIME TO MARKET

From the EBMgt Guide

Time-to-Market evaluates the software organization’s ability to actually deliver new features, functions, services, and products. Without actively managing Time-to-Market, the ability to sustainably deliver value in the future is unknown.9

Release Frequency

The time needed to satisfy the customer with new, competitive products

As already established, the only way to bring value to your customer is to release. But how often are you getting value out to your customer? Are you talking hours, weeks, months, or even years? Many larger companies choose to release around existing schedules, such as fiscal quarters, without much thought (or with complete ignorance) toward the marketplace and their customers' needs. A truly agile company should have the desire and the capability to release as frequently as necessary to stay relevant in an increasingly unpredictable marketplace. The right measurements here are crucial.

Consider using a rolling window time period to count how many production releases were made. Let’s pretend that in the first three-month period, you made two releases. One year later, you steadily raised that releases-over-time number to 12 (see Figure 3-9).

Figure 3-9 Release frequency chart

Tracking release frequency trends in this way demonstrates time-to-market agility in a much clearer way than velocity or scope ever could.
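A minimal sketch of such a rolling-window count, assuming you keep a list of production release dates; the dates and the 90-day window are hypothetical choices.

```python
# Count production releases in a rolling window (here roughly three months).
# Release dates are invented for illustration.

from datetime import date, timedelta

releases = [date(2023, 1, 15), date(2023, 2, 20), date(2023, 4, 5),
            date(2023, 5, 2), date(2023, 5, 30), date(2023, 6, 25)]

def releases_in_window(releases, as_of, window_days=90):
    start = as_of - timedelta(days=window_days)
    return sum(1 for r in releases if start < r <= as_of)

print(releases_in_window(releases, date(2023, 6, 30)))  # releases in the last ~3 months
```

Plotting this count at regular intervals produces a chart like Figure 3-9.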

Release Stabilization

The impact of poor development practices and underlying design and code base. Stabilization is a drag on competition that grows with time.

After feature development stops, often called feature freeze, how long does it take to release the software? This period is called release stabilization, a time frame in which you prepare the Increment for release with activities such as regression testing, deployment, user acceptance, documentation, and bug fixing.

It is important to note that the idea of a “stabilization” period goes against the very foundation of continuous value delivery. Stabilizing is more about fixing something that is broken, more about stability—number of crashes, defects, data integrity—than about overall product quality and user experience. In Scrum, it is expected that the Increment is “stable” (Done) every Sprint.

Release frequency is directly affected by the time it takes to stabilize a release. In other words, the time between releases (lead time) is always constrained by the time required to stabilize a release (see Figure 3-10).

Figure 3-10 Impact of long release stabilization

The closer to zero your release stabilization period is, the better.

Cycle Time

The time (including stabilization) to satisfy a key set of customers or to respond to a market opportunity competitively.

Cycle time for a feature starts when development on it begins. It ends when the feature is ready for production (see Figure 3-11).

Figure 3-11 Cycle time

Contrast this to lead time, which is the time from when a customer asks for a feature to when they actually receive it.

The more you consider cycle time to be synonymous with lead time, the more agile you will be. Whenever you are “Done” with the development of a feature, you should make a release. Any delay beyond that is considered waste.
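A minimal sketch of the distinction, assuming you record a few key dates per feature; the dates are invented, and the gap between "Done" and delivered is exactly the waste mentioned above.

```python
# Cycle time vs. lead time for one feature, using hypothetical dates.

from datetime import date

requested            = date(2023, 3, 1)   # customer asks for the feature
development_start    = date(2023, 4, 10)  # work on the feature begins
ready_for_production = date(2023, 5, 5)   # feature is "Done"
delivered            = date(2023, 6, 1)   # feature actually reaches the customer

cycle_time = (ready_for_production - development_start).days  # start of development -> "Done"
lead_time  = (delivered - requested).days                     # request -> in the customer's hands

print(f"Cycle time: {cycle_time} days, lead time: {lead_time} days")
```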

On-Product Index

The time developers are allowed to work on exactly one initiative like a product. The more developers task-switch between competing work, the less they are committed and the more delays are introduced.

Figure 3-12 The price of task switching

Gerald Weinberg asserts that for each additional project someone undertakes, they lose up to 20 percent of their time to the act of context switching.10 For example, as Figure 3-12 illustrates, working on four simultaneous projects could have you spending upwards of 60 percent of your time just juggling the different work (as opposed to doing it).

Assume you work on a major waterfall project to be released 15 months from now. Halfway through your design phase, an important initiative arises that, in your opinion, is urgent enough to be addressed right now.

Figure 3-13 Parallel versus serial on-product index

Because there is no other team available, your team is tasked to work on both projects in parallel. However, because of the cost of task switching, a 20 percent loss decreases your capability to make progress. So, each project gets not 50 percent of your capacity but only 40 percent ((100% − 20%) / 2 = 40%). Assuming the second project is planned to the same size as the first, it will take more than twice as long to complete. The remaining design effort on the first project will also take 20 percent longer. This drag continues until the first project ships its major release; after that, the second project can speed up again because the on-product index is back to 100 percent (see Figure 3-13).
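A minimal sketch of that rule of thumb, assuming a flat 20 percent loss per additional parallel project (the same assumption behind Figure 3-12):

```python
# Effective share of a person's time available to each parallel project,
# assuming 20% of total time is lost per additional project.

def capacity_per_project(projects, switching_loss=0.20):
    if projects <= 1:
        return 1.0
    lost_to_switching = switching_loss * (projects - 1)
    return max(0.0, (1.0 - lost_to_switching) / projects)

for n in range(1, 5):
    print(n, f"{capacity_per_project(n):.0%} per project")
# 1 project -> 100%, 2 -> 40%, 3 -> 20%, 4 -> 10%
```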

The end result of all this context switching is that it directly affects time-to-market as well as employee satisfaction.

Today many companies consider their employees as resources, commodities like electricity and raw material. That thinking created the idea that humans are replaceable like batteries. “Let’s replace a Duracell with an Energizer,” or “Let’s replace this programmer with that programmer. Both are programmers, what’s the difference?” Plenty: experience from former projects, technologies used, tools used, social skills . . . the list is long.

Have you ever heard of the term “Full-Time-Equivalent (FTE)”? Two 50 percent resources make one FTE, something that is supposedly equivalent to one human being.

I have seen up to four people adding up to one FTE. What nonsense!

Frederick Brooks uses a good counterexample when describing his law (Brooks’s Law): “While it takes one woman nine months to make one baby, nine women can’t make a baby in one month.”

ABILITY TO INNOVATE

From the EBMgt Guide

The Ability to Innovate is necessary but often a luxury. Most software is overloaded by non-valuable features. As low-value features accumulate, more of the organization’s budget and time is consumed maintaining the product, reducing its available capacity to innovate.11

Organizations without strength in all three KVAs may have short-term value but will not be able to sustain it. The Current Value of any organization must be accompanied by evidence of its ability to meet market demand with timely delivery (time-to-market) while being able to sustain itself over time (Ability to Innovate).

Below is a series of metrics that together provide a way to measure potential innovation.

Installed Version Index

The distribution of customers across the installed versions in production. The maintenance of the older versions has a negative impact on Ability to Innovate.

How many of your customers are using your latest product version? This percentage is a direct reflection of the value you are delivering each release. For example, if 70 percent of users are on the latest release, then only those 70 percent are getting any value from your most recent features.
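A minimal sketch of the calculation, with hypothetical version counts:

```python
# Installed version index: share of customers on the latest version.
# The version names and counts are invented for illustration.

installed = {"v3.2 (latest)": 700, "v3.1": 200, "v2.9": 100}

total = sum(installed.values())
index = installed["v3.2 (latest)"] / total
print(f"Installed version index: {index:.0%}")   # 70% of customers see the newest features
```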

A high absorption cost reduces the willingness of customers to upgrade. The absorption costs can be

time of installation;

new hardware;

training;

data migration; and

pilots.

Tracking this metric can lead to more of an investment into reducing the absorption cost. Not having to support older versions of your product allows for more investment into new innovative features.

Usage Index

Determines whether a product and its features are difficult to use and whether excess software is being sustained even though it is rarely used

How are your customers using your product? Do you know which features are used the most/least? Are they being used the way you predicted? All of this is important information when deciding what to build next.

Figure 3-14 Visualization of usage index

The feature at the bottom right of Figure 3-14 could be an admin feature that only a few dedicated personnel use. It might also be the new “killer” feature that you assume is going to have a huge positive impact. In either case, it warrants some attention. Its position on the graph could mean the feature is great but not easy to discover. This may be an opportunity to inspect and adapt your user interface or coordinate user training.
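A minimal sketch of how raw usage data could feed such a view, assuming your product emits per-feature analytics; the feature names and numbers are invented.

```python
# Per-feature reach (share of active users touching the feature) and raw usage events,
# the two dimensions plotted in a usage-index view like Figure 3-14.

feature_usage = {
    "search":        {"users": 9500, "events": 120_000},
    "export_pdf":    {"users": 1200, "events": 1_800},
    "admin_console": {"users": 15,   "events": 4_000},   # few users, heavy use
}

active_users = 10_000
for name, stats in feature_usage.items():
    reach = stats["users"] / active_users
    print(f"{name}: used by {reach:.0%} of active users, {stats['events']} usage events")
```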

The often-cited Standish Group CHAOS report was revisited in 2014, and the findings were comparable to those in the original 2002 report. As Figure 3-15 shows, only 20 percent of product functionality is "often" used, while 50 percent is "hardly ever" used.

Figure 3-15 Usage of features and function by Standish group

Innovation Rate

Growth of technical debt caused by poorly designed and developed software. Budget is progressively consumed as the old software is kept alive.

How costly is it to keep your product up and running? How much of your development budget is spent on maintenance and support? The healthier your code base is and the more test automation you have, the more capacity you have for innovative features. This extra bandwidth gives you more ability to respond to market trends and outpace your competition.

This is essentially the definition of technical debt. If you fail to address it now, you will end up paying more interest in the future. You eventually might find yourself in technical bankruptcy, resulting in a complete system rewrite: “Let’s just file Chapter 11 and start over.”

Netscape's project to improve HTML layout in Navigator 4 has been cited as an example of a failed rewrite.12 The consensus was that the old Netscape code base was so bad that the developers simply couldn't work in it anymore; hence Navigator itself was rewritten around a new engine, breaking many existing features and delaying release by several months. Meanwhile, Microsoft focused on incremental improvements to Internet Explorer and did not face the same obstacles.13

Some examples of technical debt include the following:

Lack of automated
  Build
  Unit tests
  Acceptance tests
  Regression tests
  Deployment

Code quality
  Highly coupled code
  High code complexity
  Business logic in the wrong places
  High cyclomatic complexity (McCabe metric)
  Duplicated code or modules
  Unreadable names or algorithms

A 2010 Forrester study14 showed that less than 30 percent of IT budgets was spent on new features; the rest went to maintenance and to expanding existing capacity (see Figure 3-16).

Figure 3-16 Industry average distribution of IT budgets

Having a similar metric for your product or organization can be an important way to create transparency around innovation. This can justify investment into quality and automation practices.

Here are three ways to measure innovation rate:

1. Count the number of Product Backlog items that are new features versus the planned maintenance items that are about technical debt, bugs, and upgrades.

2. Some organizations have dedicated maintenance teams or individuals. Measure the ratio of maintenance people versus new product development people.

3. When the maintenance work is unpredictable, measure the time spent on unplanned maintenance items each Sprint.

Any of these will give you an innovation ratio that can be monitored over time.
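A minimal sketch of the first option, assuming each Product Backlog item carries a type label; the items and labels below are hypothetical.

```python
# Innovation rate for one Sprint: share of delivered backlog items that were new features
# rather than planned maintenance (technical debt, bugs, upgrades).

sprint_items = [
    {"id": "PBI-101", "type": "feature"},
    {"id": "PBI-102", "type": "feature"},
    {"id": "PBI-103", "type": "bug"},
    {"id": "PBI-104", "type": "tech-debt"},
    {"id": "PBI-105", "type": "upgrade"},
]

new_work = sum(1 for item in sprint_items if item["type"] == "feature")
innovation_rate = new_work / len(sprint_items)
print(f"Innovation rate this Sprint: {innovation_rate:.0%}")   # 40%
```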

I have worked with multiple organizations whose existing accounting structures made it impossible to correlate long-term maintenance costs with new product development. This created a huge deficiency in transparency, because total product costs could not be measured accurately. The consequence was a lack of quality, which left little room for innovation.

Defects

Measures increasingly poor-quality software, leading to greater resources and budget to maintain it and potential loss of customers

This is likely the most common metric used in software development. Although not the best indication of value, it is still a good practice to create a cadence of tracking the number of defects. The importance of this metric is more about the trend over time rather than the actual number. Not all defects are worth fixing. However, noticing that the number of defects is increasing over time can be an indication that the quality of the system is decreasing, and therefore you may have less time to devote to innovation.

TRACKING METRICS

An important aspect of putting value metrics in place is establishing the discipline to remeasure over time. The trends that emerge are as important as the data itself. These trends provide you with the information necessary to adapt your products and processes in a timely manner.

Radar graphs like the one in Figure 3-17 provide an interesting way of visualizing progress over time.

Figure 3-17 Spider graph showing metric changes over time

A more detailed way to organize your metrics uses a “scoreboard” style spreadsheet such as the one in Figure 3-18. This approach draws clear separation between the more circumstantial progress metrics (top), the leading value metrics (left), and the lagging metrics (right).

I like to make a scoreboard like this visible in a common area. Such a placement keeps the vision and the value of the product front and center with Development Teams and stakeholders.

Figure 3-18 Representation of metrics in a scoreboard-style spreadsheet (done with Microsoft Excel)

WHERE YOUR MONEY GOES

Combining the four KVMs is a powerful way to visualize where your money is not going. The example shown in Figure 3-19 is a visualization using industry averages.

Investing $1 leads to only $0.06 in return.

Figure 3-19 How much of your money goes toward value

Innovation Rate of 29 percent, reflecting the Forrester study where 53 percent is spent on maintenance and support on top of 18 percent for expanding capacity

On-Product Index of 80 percent, showing a situation where a team is working on two initiatives in parallel, causing a 20 percent task-switching loss

Usage Index, reflecting the Standish Chaos study where 35 percent of the features are used often or frequently

Installed Version Index, showing a situation where 70 percent of the users are on the latest release

Keeping track of where your money is going in a format similar to this provides great insight that can justify an investment in more strategic initiatives around innovation, maintenance, team structure, automation, and user guidance.
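To see where the $0.06 comes from, here is a minimal sketch that simply multiplies the four indices listed above:

```python
# How much of each invested dollar ends up as value in customers' hands,
# using the industry-average figures from Figure 3-19.

innovation_rate         = 0.29  # share of budget spent on new features
on_product_index        = 0.80  # share of time not lost to task switching
usage_index             = 0.35  # share of features used often or frequently
installed_version_index = 0.70  # share of users on the latest release

value_per_dollar = (innovation_rate * on_product_index
                    * usage_index * installed_version_index)
print(f"${value_per_dollar:.2f} of value per $1 invested")   # ~$0.06
```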

NEGATIVE VALUE

It is often assumed that value is always positive, but value can also be negative. Perceived negative value has a far stronger impact than positive value. According to several studies, people are between three and seven times more likely to share negative experiences than positive ones.

I have a nice diesel Volkswagen. I’ve always bought Volkswagens; my mother worked at a Volkswagen dealer. After Dieselgate15 I’m done. No more Volkswagens—ever!

Negative value can be either visible or invisible.

VISIBLE

Negative value can take the form of new bugs that render an important feature unusable, excessive system downtime, decreased performance, or a clunkier user interface. These all create negative value that the customer directly experiences.

Sometimes the cost of even a bug-free release outweighs its value. Consider the cost of user training, environment validation (as in a forensic lab), and regulatory audits.

I worked with a company that designed systems for manufacturing plants across the United States. Its users were not computer proficient and needed training across the country every time they updated the software. In this case, the absorption costs outweighed the value of the release, so the company instead decided to make monthly releases to a single manufacturing plant (beta) where they could collect feedback to inspect and adapt the direction of the product. They released nationwide every six months.

INVISIBLE

The other form of negative value is internal—not visible to the customer. One example is a new feature that nobody uses. It was implemented, tested, and documented without generating any value. Even worse, from now on you must maintain this feature, which consumes money you no longer have for innovation. Rushing a Development Team forces them to cut corners and sacrifice technical quality—and thus to fall short of the high standards you expect. The fix is quick and dirty, and the result is technical debt. You know the saying about quick and dirty? The dirty stays, while the quick is long gone.

The 2×2 matrix in Table 3-2 adapted from Philippe Kruchten16 is a good way to look at it.

Table 3-2 Value 2×2

Positive Value
  Visible: New features; Added functionality
  Invisible: Architecture; Infrastructure; Design; Automation (CI, CD); Technical debt (only if temporary)

Negative Value
  Visible: Defects/bugs; System down times; Performance; User experience
  Invisible: Technical debt; Not used features; Cost of deployment; Cost of training

Consider the following statement:

Technical debt is not, not “Done.”

What does this mean exactly? On some occasions it could be the right business decision to create technical debt—for example, being first to market, creating a quick prototype, or reacting to an unexpected event.

You might be “Done” yet still have accumulated technical debt.

If you do not address technical debt right away, just understand that you will have to pay it back eventually—with interest. It may not be as visible as the interest rate on your credit card account, but you pay for it nevertheless. Think back to the innovation rate: Bad technical software quality will slow you down as simple changes take longer and require more effort than they should.

Having a solid definition of “Done” can help minimize the amount of technical debt produced. For any existing technical debt, ensure that you have a plan to repay it before the interest payments get out of control. Keep in mind that the time needed to pay off the debt will result in less visible value delivered each Sprint. Just like with financial debt, you will have less to spend on other things until your debt is paid off.

VALUE NEUTRALITY

This chapter has introduced you to a lot of interesting metrics. Their number one purpose is to provide data to generate information, so that you can make better decisions in the uncertain world of product development. Keeping these metrics free from influence and judgment is key. There is no bad or good information; there is only the current reality. This is what is meant by value neutrality.

Not having truly value-neutral metrics can cause unintended consequences and undermine transparency. This is known as the Perversion of Metrics.

PERVERSION OF METRICS

I was taught that you cannot manage what you cannot measure. I still believe that this is right. On the other hand, I also strongly believe that what you measure drives the behavior of the people involved.

In the 1990s in Europe, far too much milk was produced because it was highly subsidized. The commonly used terms in the news were "milk sea" and "mountain of butter." This drove the price so far down that milk was transformed into more storable forms such as butter and milk powder. But even those actions eventually reached their limit. Finally, the European Union decided that the number of milk-producing cows needed to be reduced. As proof that they were following through on this plan, farmers were asked to mail in an ear of each slaughtered cow. Once the ear was received, a financial reward was sent to the farmer. The first mistake was that the EU did not ask for a specific ear, left or right. The second mistake was that cows are able to live without ears. At some point, reporters discovered whole fields of earless cows.17

The cobra effect18 occurs when an attempted solution makes a problem worse—an instance of unintended consequences. The term stems from an anecdote set at the time of British rule of colonial India. The British government was concerned about the number of venomous cobra snakes in Delhi. The government therefore offered a bounty for every dead cobra. Initially this was a successful strategy as large numbers of snakes were killed for the reward. Eventually, however, enterprising people began to breed cobras for the income. When the government became aware of this, the reward program was scrapped, causing the cobra breeders to set the now-worthless snakes free. As a result, the wild cobra population further increased. The apparent solution for the problem made the situation even worse.

There is neither good nor bad news; there is only data. If you punish bad news, you will only get good news—or, more accurately, camouflaged bad news made to look good. Goodhart’s law expresses the same idea: “When a measure becomes a target, it ceases to be a good measure.” (See Figure 3-20.)

A good (or bad) example of something that is easy to count, easy to fake, and meaningless (perhaps even dangerous) to measure is the productivity of a single developer expressed as the number of lines of code written in a given time frame. By measuring this, you substitute lines of code for growing functionality, encouraging one of the worst practices in software development: "copy-paste programming."

Figure 3-20 Goodhart’s law visualized

Whatever you start to measure, play devil’s advocate and have a creative brainstorming session about how the metric can be gamed. Try it with a couple of colleagues to increase your chances of surfacing the ingenuity of the people being measured.

I worked with a large organization whose well-intentioned Project Management Office (PMO) truly wanted to help the teams undertaking a Scrum adoption. The PMO put in place a rule that any team that varied its Sprint velocity by plus or minus 20 percent would need to meet with a PMO representative, who would see how he could help. Guess what started happening to the velocity metric? The teams did not see this as an act of goodwill but as punishment, and velocity was no longer a value-neutral metric.

QUIZ REVIEW

Compare your answers from the beginning of the chapter to the ones below. Now that you have read the chapter, would you change any of your answers? Do you agree with the answers below?

Statement (Agree or Disagree?)

There is no value until a product is released.

For for-profit organizations, value is ultimately represented in terms of money (revenue and cost).

Attaching incentives to value metrics improves performance and morale.

Metrics can help validate business hypotheses and the impact of releases.

Velocity is a good measure of value delivered.

A release can produce negative value.

____________________________

1. Steve Denning, The Leader’s Guide to Radical Management: Reinventing the Workplace for the 21st Century (Hoboken, NJ: Jossey Bass, 2010), 57.

2. New World Encyclopedia, s.v. “cargo cult,” accessed March 5, 2018, http://www.newworldencyclopedia.org/entry/Cargo_cult.

3. “History,” Domino’s Pizza, accessed February 17, 2018, https://biz.dominos.com/web/public/about-dominos/history

4. Ken Schwaber and Patricia Kong, Evidence-Based Management Guide (2014), 4.

5. Gallup, State of the American Workplace: Employee Engagement Insights for U.S. Business Leaders (2013).

6. Daniel H. Pink, Drive: The Surprising Truth about What Motivates Us (Edinburgh: Canongate, 2010).

7. Ralph is also a licensed Management 3.0 facilitator.

8. “What Is Net Promoter?,” NICE Satmetrix, accessed February 17, 2018, https://www.netpromoter.com/know/.

9. Schwaber and Kong, Evidence-Based Management Guide, 4.

10. Gerald Marvin Weinberg, An Introduction to General Systems Thinking (New York: Wiley, 1975).

11. Schwaber and Kong, Evidence-Based Management Guide, 4.

12. See Joel Spolsky, “Things You Should Never Do,” Joel on Software (blog), April 6, 2000, https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/.

13. Jamie Zawinski, “Resignation and Postmortem,” March 31, 1999.

14. Forrester, “IT Budget Planning Guide for CIOs,” October 2010.

15. Karthick Arvinth, “VW Scandal: Carmaker Was Warned by Bosch about Test-Rigging Software in 2007,” International Business Times, September 28, 2015.

16. Philippe Kruchten, "The (Missing) Value of Software Architecture," Kruchten Engineering Services, Ltd. (blog), December 11, 2013, https://philippe.kruchten.com/2013/12/11/the-missing-value-of-software-architecture/.

17. David Medhurst, A Brief and Practical Guide to EU Law (Hoboken, NJ: Wiley, 2008), 203.

18. Patrick Walker, “Self-Defeating Regulation,” International Zeitschrift 9, no. 1 (2013): 31.
