© Michael Nir 2018
Michael Nir, The Pragmatist's Guide to Corporate Lean Strategy, https://doi.org/10.1007/978-1-4842-3537-9_5

5. Identify Metrics That Matter

Metrics and Measures Drive Behavior
Michael Nir, Brookline, Massachusetts, USA

Metrics are used to drive improvements and help businesses focus their people and resources on what’s important. Moreover, metrics drive decision-making: what should the company concentrate on, which objectives should be prioritized over others, how is the enterprise progressing toward its objectives, and what measures are required for course correction? These are questions that every enterprise faces. Resources are never limitless, so on top of the need to focus on the highest-priority objectives aligned with the company’s mission, as discussed in Chapter 3, a company needs an effective mechanism to identify early, objective signs of failure and to assess root causes, whether they are unrealistic expectations, resourcing and skill-related challenges, or poor self-organization and leadership.

What I Read

Metrics are the keys to success for several reasons:
  1. Success is defined via metrics. Agilists who write acceptance criteria for user stories are well familiar with the INVEST mnemonic. The T in this mnemonic stands for “Testable”: the acceptance criteria must provide measures of achieving results. If the goal is to improve customer satisfaction, it may never be achieved if we strive toward an unstated measure of customer satisfaction. However, if we agree to bring the NPS (net promoter score) up to 8 out of 10, we know when to stop the project and consider it a success. Similarly, if we agree to reduce cycle time from a measured baseline of 10 days to 5, we create a shared understanding of what success looks like (a minimal sketch follows this list). The concept of SMART goals emphasizes the role of metrics in aligning the business to customer needs and shareholder satisfaction.
  2. In order to inform decisions, metrics must be objective. In numerous instances, decisions are made by the highest-ranking decision maker or the loudest voice in the room.1 By having an agreed measure of success, we remove ambiguity from the decision-making process.
  3. Metrics must be actionable, providing enough information to course correct. If customer satisfaction is not improving as expected, we need to dig deeper into the root cause to understand what drives customer satisfaction and how it impacts the business overall. Eric Ries refers to this as the “three As”: actionable, accessible, and auditable.
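To make the “Testable” point concrete, here is a minimal Python sketch that checks measurements against the two agreed targets mentioned in item 1. The survey scores and cycle times are invented for illustration, not taken from any real project.

    from statistics import mean

    # Hypothetical survey responses (0-10 scale) and cycle times (days).
    survey_scores = [9, 8, 7, 10, 8, 9, 6, 8]
    cycle_times_days = [6, 4, 5, 7, 3, 5]

    SCORE_TARGET = 8        # agreed target: average score of 8 out of 10
    CYCLE_TIME_TARGET = 5   # agreed target: 5 days, down from a 10-day baseline

    def goal_met(values, target, lower_is_better=False):
        # A goal is "Testable" only when a measurement can be compared to a target.
        measured = mean(values)
        return measured <= target if lower_is_better else measured >= target

    print("Satisfaction goal met:", goal_met(survey_scores, SCORE_TARGET))
    print("Cycle-time goal met:",
          goal_met(cycle_times_days, CYCLE_TIME_TARGET, lower_is_better=True))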

Metrics and measurements drive behavior. Following a study I once read, I decided to experiment. I placed a plastic board with the word waste measurement above a trash bin next to a production line that was having quality issues. I said nothing in advance and I didn’t share any message about why the board was placed and what management was planning to do with the measurements. The waste in the trash bin was quickly reduced and the quality of production improved.

Every time a measurement is put in place, behavior changes. I am not advocating that you do the same, since the behavioral changes are often unforeseen; however, I want to emphasize the impact measurements have in driving behavior. Sometimes the measurement can even run contrary to the desired outcome, and many organizations have horror stories of measurements that went awry.

In another organization I worked at, recognition and awards were a big part of the culture. The CTO would come to the desk of an employee who had been nominated for an award and hand her the letter of nomination. There were company-wide awards with mini-movies produced in multiple funny formats, and the recognized employees were flown to headquarters quarterly to collect these awards as the culmination of a company town hall.

What’s wrong with this approach? Recognition is an important part of company culture and employee motivation. There was nothing wrong with it, except that there was no way to nominate a team, only an individual. As a result, it was not uncommon for people to take credit for work performed by others or to pursue initiatives on their own rather than contributing to the work of the team they were part of. It took a while to change this culture, and eventually the company opened award nominations to teams in addition to individuals.

In each case, the first rule of establishing metrics is to promote the behaviors the company would like to encourage. Besides quantifying business outcomes, metrics shape the culture.

It is also important to balance leading and lagging metrics in decision-making. The following are some examples:
  • Leading indicators: Number of innovations, number of patents, customer satisfaction, brand recognition, cycle time from start to completion of a workflow, growth in new markets

  • Lagging indicators: Net revenue, revenue growth, return on net assets, operating income growth, team velocity, PI predictability measure

Leading indicators allow you to predict success or failure and to be proactive in applying validated learning to day-to-day business operations. Lagging indicators are important when deciding whether to continue on the current course of business or to switch investment into different areas.
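To make the distinction concrete, here is a small Python sketch with hypothetical data: a leading indicator (weekly cycle time) is watched for a worsening trend so the team can course correct long before a lagging indicator such as revenue registers the problem.

    # Hypothetical weekly cycle-time samples in days. A rising trend is a
    # leading warning sign that would surface in revenue (a lagging
    # indicator) only much later.
    weekly_cycle_times = [5.0, 5.2, 5.1, 5.6, 6.0, 6.4]

    def worsening_trend(samples, window=3):
        # Flag when the recent average exceeds the earlier average.
        recent = sum(samples[-window:]) / window
        earlier = sum(samples[:-window]) / len(samples[:-window])
        return recent > earlier

    if worsening_trend(weekly_cycle_times):
        print("Leading indicator deteriorating: course correct now,")
        print("before the lagging indicators register the damage.")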

The metrics collected may differ depending on the nature of the business and the stage it is in. For example, for an existing business, it is important to assess market value, net promoter score, and revenue. For a new business, it is more important to assess growth and increasing efficiency (as these will lead to profitability). Dave McClure’s “pirate metrics” provide a set of validated metrics for any service-oriented business that represent customer behavior: Acquisition, Activation, Retention, Revenue, Referral (AARRR).2
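As a sketch of how a pirate-metrics funnel might be tallied (the stage counts below are invented, not McClure’s), reducing it to stage-by-stage conversion rates shows where customers drop off. In practice, these counts would come from instrumented events rather than literals.

    # Hypothetical counts of users reaching each pirate-metrics (AARRR) stage.
    funnel = {
        "acquisition": 10_000,  # discovered the service
        "activation": 4_000,    # had a good first experience
        "retention": 1_500,     # came back
        "revenue": 400,         # paid for something
        "referral": 120,        # recommended it to others
    }

    # Stage-to-stage conversion rates show where customers drop off.
    stages = list(funnel)
    for prev, nxt in zip(stages, stages[1:]):
        print(f"{prev} -> {nxt}: {funnel[nxt] / funnel[prev]:.0%}")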

Finally, it is important to realize that not all organizations are metrics-driven. Opponents of metrics frequently state that metrics can be easily gamed. As always, the truth is somewhere in between. Obviously, the way we measure data influences the numbers. For example, even such an objective metric as cycle time (how long something takes from start to finish) can be gamed through how the cycle is defined: does a process to address a customer request start when the customer makes the request, or when a service desk prioritizes it for execution? The first definition could expose weeks of “gap” time, while the second could show hours or even minutes.
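A minimal sketch, using hypothetical timestamps, of how the choice of start event alone changes the reported cycle time:

    from datetime import datetime

    # Hypothetical timestamps for a single customer request.
    requested = datetime(2018, 3, 1, 9, 0)      # customer raises the request
    prioritized = datetime(2018, 3, 19, 14, 0)  # service desk queues it for work
    delivered = datetime(2018, 3, 19, 16, 30)   # request resolved

    # Same work, two very different "cycle times", depending on the start event.
    print("From request:    ", delivered - requested)    # weeks of mostly gap time
    print("From prioritized:", delivered - prioritized)  # a couple of hours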

However, how a metric is defined and measured can be controlled and agreed upon. Eric Ries refers to this as “auditable” metrics, meaning that everyone involved in the initiative should be able to see the reports and understand them. Data has to be collected in a uniform and transparent way. This is another key to successful metrics that drive informed decisions: mapping metrics to a limited number (3-5) of pain points and measuring each parameter consistently in the agreed-upon way, with a clear measure of success. I have orchestrated numerous ways of automating metrics collection using a number of workflow, lifecycle management, and data virtualization tools. If there is one investment a company can afford, I would advise starting with this one. If there is no way to measure progress objectively, no initiative will be successful, if only because there is no way to tell success from failure.

What Happened When I Tried Implementing in Big Corporations

Large enterprises rarely dispute the value of metrics. However, there are two frequent problems:
  1. Failure to recognize the value of metrics as “validated learning,” one of the lean startup principles: Metrics are not a reason to punish employees for failing but rather an opportunity to make business decisions as early in the game as possible.
  2. Vanity metrics: In a large test prep company, about 20 parameters were measured around the organization: student progress test by test, student and parent satisfaction, teacher availability, individual tutor rating, UI efficiency, rating against competitors, and many others. A lot of time and effort was invested in data collection, analysis, and resulting actions. At the same time, the company was not addressing the changing nature of the educational landscape and the major disruption of the education industry by Khan Academy, Coursera, and open MIT courses. The large number of parameters was distorting the real picture and impeding leadership’s decision making.

It is important to collect the right metrics, as simply and clearly as possible, to inform sound business decisions and drive the desired behaviors. For metrics that do not inform the right decisions or address the root cause, lean startup uses the term “vanity metrics”: numbers that look as good as possible but do not reveal the truth, hindering the right business decisions rather than informing them.3

Anti-Pattern

A big data company providing financial services was concerned that its large-scale initiatives, though deemed successful, were not bringing the expected business success. As a coach and consultant, I was initially tempted to dig deeper into how business goals were set and how the pain points and business opportunities were identified. I spent limited time reviewing business initiatives, customer feedback, and company balance sheets. Then, however, I looked at the percentage of successful projects. Despite far lower industry averages, a whopping 90% of this company’s projects were reported as successful. There were, however, multiple surprises when a project that had been reported as “green” throughout delivery in fact failed miserably when it was supposed to be delivered or, even worse, after it was delivered to production. Upon further analysis, I found out that the company had a history of firing leaders and team members involved in projects that were reported as failing. As a result, no one wanted to be the messenger delivering honest and transparent metrics throughout execution. What’s the moral of this story? If there is no safe space, any metric is questionable.

As an opposite example, an engineering leader at a health industry giant I worked at decided to take the engineering practices within his organization to the next level. To do so, he formulated a number of objectives: implement test automation, move applications to the cloud, increase component-based system design including microservices, and a number of other objectives specific to his area of responsibility. Despite the standard advice of limiting the number of objectives, he decided on a total of 12 annual goals he wanted his organization to achieve. He felt that it was important to make the progress visible, so he created a board that he called “Get to Green” (a simple PowerPoint slide with a 12x12 matrix, each cell colored red, yellow, or green, plus a physical board in each location of his highly distributed team) and created clear measurements for each goal: what red, yellow, and green looked like for each division within his organization. He then invited a representative from each division to form a continuous improvement group that baselined each of the divisions and started meeting monthly to review progress.

At this point, you might think that there was some unhealthy competition in this group or that the reviews were not objective. Nothing like that! I was invited to facilitate these conversations, and each of these meetings resulted in a valuable conversation between professionals who shared advice; supported consistent tooling, training, and methodology; and moved from being primarily “red” to being uniformly “green” on each of the 12 parameters within one year, all while the teams were competing, celebrating “getting to green” on each parameter, and gamifying the journey on top of learning new skills and implementing new technologies.
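A “Get to Green” board like the one described can be as simple as a grid of agreed statuses per objective and division. The following Python sketch renders such a board; the objective and division names are invented for illustration.

    # Hypothetical "Get to Green" board: rows are objectives, columns are
    # divisions, and each cell holds an agreed status (R, Y, or G).
    board = {
        "test automation": {"claims": "Y", "billing": "R", "portal": "G"},
        "cloud migration": {"claims": "R", "billing": "Y", "portal": "Y"},
        "microservices":   {"claims": "G", "billing": "Y", "portal": "G"},
    }

    divisions = ["claims", "billing", "portal"]
    print("objective".ljust(18) + "".join(d.ljust(10) for d in divisions))
    for objective, statuses in board.items():
        print(objective.ljust(18) + "".join(statuses[d].ljust(10) for d in divisions))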

Best Practice

Jeff Patton4 discusses output-outcome-impact metrics in User Story Mapping. The four disciplines of execution5 emphasize lead metrics rather than lag metrics. I found that the most effective metrics leading to enterprise success are a combination of both approaches, the so-called impact lead metrics. Contrast that with what organizations traditionally focus on when implementing agile (the team velocity metric) or scaled agile (the train velocity metric). In both cases, the metric is a lag output metric: it is a delayed indicator of performance. On top of that, it is an output measure; high or low team velocity tells us nothing about the actual outcome of the delivered work and in no way informs us of the impact the delivered work has.
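One way to operationalize this combination is to tag every metric on two axes, timing (lead or lag) and level (output, outcome, or impact), and then steer by the metrics at the intersection. The sample inventory in this Python sketch is hypothetical.

    # Hypothetical metric inventory tagged on two axes:
    # timing ("lead"/"lag") and level ("output"/"outcome"/"impact").
    metrics = [
        {"name": "team velocity",      "timing": "lag",  "level": "output"},
        {"name": "net revenue",        "timing": "lag",  "level": "impact"},
        {"name": "trial signups/week", "timing": "lead", "level": "outcome"},
        {"name": "NPS trend",          "timing": "lead", "level": "impact"},
    ]

    # The metrics worth steering by sit at the intersection: impact lead metrics.
    impact_lead = [m["name"] for m in metrics
                   if m["timing"] == "lead" and m["level"] == "impact"]
    print("Impact lead metrics:", impact_lead)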

What I Learned As I Adapted the Concepts at Corporations

Bigger organizations are inherently different from smaller organizations, a truth that is easy to forget. As a result, these lessons from lean startup are often adapted to organizational realities:
  1. In the rush to transform the organization, we often forget to ask ourselves which metrics would show success and which lead metrics would guide us, helping us know we are headed in the right direction.
  2. Many agile transformations embrace team velocity as one of the three top metrics and promise executives an ongoing improvement in velocity. One may ask why velocity is the wrong metric. Actually, it is not! Focusing on velocity as a metric is similar to focusing on output rate in production; it is the wrong thing to measure and celebrate. However, since it is so easy to measure, organizations prefer to focus on it rather than explore leading metrics that are impactful.
  3. In a recent scaled agile implementation, senior executives joined the big room planning event and asked the teams why their velocity was low and how they could produce more in the next quarter. When we analyzed the work the teams were producing, we became concerned that the work itself was the wrong work. The teams were focused on developing whimsical requests from key business owners rather than the core functionality of the products. By focusing on velocity (the team output), we failed to notice that we were not creating the outcomes that would lead to sustained impact in the market. I found this to be a recurring pattern in big organizations. In fact, the bigger the organization and the more silos in place, the more its metrics are local, lagging, and output-driven rather than metrics that shape the success of the organization.
  4. We learned that we needed a framework to validate the metrics, both internal and external. We classified metrics as output-outcome-impact, examined the metrics currently in place, and broke them down into categories. Most metrics were internal output lag metrics. Velocity is a good example: it measures the rate of task completion, so once it is measured we can no longer affect it. We found that more than 75% of the metrics were output metrics and 90% were lag metrics (a tallying sketch follows this list).
  5. We challenged ourselves to develop a few lead metrics that measured outcome and impact.
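As a sketch of the audit described in item 4, tallying the tags makes the skew toward output and lag metrics visible. The inventory below is invented, so its percentages differ from the 75% and 90% we actually found.

    from collections import Counter

    # Hypothetical tagged inventory from a metrics audit (see item 4 above).
    inventory = [
        ("velocity",     "lag",  "output"),
        ("defect count", "lag",  "output"),
        ("burndown",     "lag",  "output"),
        ("net revenue",  "lag",  "impact"),
        ("NPS trend",    "lead", "impact"),
        ("cycle time",   "lead", "outcome"),
    ]

    timing = Counter(t for _, t, _ in inventory)
    level = Counter(l for _, _, l in inventory)
    total = len(inventory)
    print(f"lag metrics:    {timing['lag'] / total:.0%}")
    print(f"output metrics: {level['output'] / total:.0%}")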

Tip

Reflect: What are the meaningful metrics in your organization? What data is currently collected? How does it drive desired behaviors? Which are vanity numbers and which are meaningful, in your opinion? How does this inform decision-making? Are these metrics visible throughout the organization? How do they cascade through all its structures?

Once you answer these questions, identify improvements in how the organization collects and uses data. Share these ideas with your peers and the management team.
