CHAPTER 6

EXECUTION AND MEASUREMENT

Taking Action (Finally!)

To build vision-driven products, we must ensure that our vision and strategy are closely tied to our everyday actions and how we measure success. The story of Nack illustrates how products can go awry when the vision becomes disconnected from tactics and how we can stay vision-driven using the RPT approach to execution and measurement.

I first met Nack’s founder, Paul Haun, over coffee when he told me about his company with his characteristically infectious enthusiasm. Haun started Nack determined to spread kindness around him through “random acts of coffee.” He was inspired by the tradition of “suspended coffee” that started in Italy, where you’d buy one coffee and pay for two—the second paid forward for someone who could use a random act of kindness.

Haun built Nack as a mobile app and iterated on features to consistently delight users. He had read case studies describing the iPhone as an iconic product because it delighted customers, and the book explaining how Zappos found success by delivering happiness. Armed with these lessons, Haun, like many entrepreneurs, iterated with the goal of delighting customers.

His app allowed users to find suspended coffees around the city and also create “random acts of coffee” by paying for suspended coffees. Nack’s users were using the app almost daily, recommending it to their friends and frequently inviting others to join. As a result, Nack had enviable usage metrics including Net Promoter Scores, time spent on the app, and number of daily users.

However, although these popular metrics were pointing up and to the right, Haun’s enthusiasm turned to dismay when he shared how things were going. It turned out that Nack users were just delighted by the free coffees—they were logging in every day to search for free coffees and driving distances to claim them. They weren’t, however, paying it forward or spreading kindness through the app. Despite delighting customers through his iterations, Haun’s product wasn’t creating the change he wanted to bring to the world.

Conventional wisdom is that to build successful products, you test your features in the market and iterate based on what customers want—you have to be “customer-driven.” In reality, getting customer feedback is like asking for directions when you’re in the car—it helps you navigate better. As the driver, however, you have to know your destination to be able to ask for directions. Nack’s features were customer-led rather than customer-driven: without a clear destination, customer feedback wasn’t helping Haun navigate; it was deciding where the product went.

Nack’s loudest customers were the ones who complained that there were no free coffees in their vicinity. In trying to continue to delight customers, Haun had spent over $1,500 of his own money to fund suspended coffees through Nack. Yet he was no closer to spreading random acts of kindness (except his own).

Haun needed to connect his execution and measurement to his vision. In shifting his mindset to a vision-driven approach, Haun defined his product vision as promoting kindness among coffee drinkers, and Nack was his mechanism for bringing about this change. To deliver on this vision, the strategy focused on teaching users to gift coffee as a way of showing kindness. In translating this to action, Haun rebuilt Nack with a new set of features: whenever users received free coffee, they would always receive two—one to consume and the second to gift.

Brands wanted to be part of this movement to promote kindness and offered to fund suspended coffees. Users learned to give coffees that brands had funded, but the act of gifting created a fundamental change in user behavior. Users found joy in giving, and soon 27 percent of the users who gifted brand-funded coffees were using Nack to buy someone a coffee with their own money!

Instead of delighting users because they received free coffee, the new Nack made users feel good because they were sending someone a coffee. By translating a clear vision and strategy into execution and measurement, Nack was delighting users with the goal of creating the change Haun had envisioned.

This chapter will give you practical tips and tools to translate your vision, strategy, and priorities into execution and measurement. It’ll help you get the best of both vision and iteration through a hypothesis-driven approach to execution.

HYPOTHESIS-DRIVEN EXECUTION AND MEASUREMENT

Your product is a constantly improving mechanism to create change. To decide what to improve, organizations emphasize the need to make data-driven decisions. A data-driven approach to building your product is great—but only if you’re measuring the right things. Data-driven is often taken to mean that the business and product are driven by metrics. Unfortunately, too often the metrics used are simply those that are easy to measure (e.g., registered users as a proxy for usage) or popular to measure. When products are driven by the wrong metrics, they catch Hypermetricemia.

Your product may seem successful when you track popular metrics such as daily active users, how likely your users are to recommend your product (Net Promoter Scores), and revenues. But each of these popular metrics comes with assumptions that may not hold true for your business.

A vision-driven product is not an end in itself; because it’s a mechanism to create the change you envision, it’s successful only if it’s bringing about that change. This is why in the RPT way, instead of measuring popular metrics, you have to measure what’s right for your organization.

The template in figure 6 can help you work through your execution and measurement plan. The main goal of this template is to help you identify the connection between what you’re testing and what you’re measuring—your hypotheses. You can write a hypothesis as a fill-in-the-blanks statement:

If [experiment], then [outcome], because [connection].

The hypothesis derived from Nack’s strategy of teaching users to gift coffees would be written as follows:

If [we give users two coffees, one of which they must gift], then [they’ll start using their own money to gift coffee], because [they’ll learn to gift coffee and enjoy it].

Once you have a hypothesis in place, you can measure key metrics to know if the outcome was what you expected and if your experiment (or strategy) is working. For Nack, the key metric that indicated progress toward the vision of spreading kindness was the percentage of users who were spending their own money on gifting coffee.

The activities column in the template helps you list the tasks needed to set up your experiment and test the hypotheses you’ve stated. At Nack, before we could test our hypothesis, we needed to partner with brands to fund coffees as part of their marketing campaigns. We also had to develop the features to enable users to receive and gift coffees.

FIGURE 6: Radical Product Thinking template for a hypothesis-driven approach to execution and measurement
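To make the template concrete, here is a minimal sketch (in Python) of how you might capture one row of it, using the Nack example. The class and field names are illustrative choices of mine, not part of the RPT template itself.

```python
from dataclasses import dataclass, field


@dataclass
class HypothesisEntry:
    """One illustrative row of the execution-and-measurement template."""

    # If [experiment], then [outcome], because [connection].
    experiment: str
    outcome: str
    connection: str
    key_metric: str                                  # indicates whether the outcome is happening
    activities: list = field(default_factory=list)   # tasks needed to set up the experiment

    def statement(self) -> str:
        """Render the fill-in-the-blanks hypothesis statement."""
        return f"If {self.experiment}, then {self.outcome}, because {self.connection}."


# Nack's hypothesis from this chapter, captured in the template:
nack_gifting = HypothesisEntry(
    experiment="we give users two coffees, one of which they must gift",
    outcome="they'll start using their own money to gift coffee",
    connection="they'll learn to gift coffee and enjoy it",
    key_metric="percentage of users who spend their own money gifting coffee",
    activities=[
        "Partner with brands to fund coffees as part of their marketing campaigns",
        "Build features for users to receive and gift coffees",
    ],
)

print(nack_gifting.statement())
```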

When you use this template, you can think about what metrics would indicate progress toward your vision and strategy. When you think about your vision as a hypothesis, you can measure progress by asking, “What metrics would indicate whether we’re bringing about the world we describe in this vision?”

Similarly, each element of your strategy represents what you think will work; your experiments and metrics will test whether it actually does. For each element of your RDCL strategy, you’ll want to test your approach through an experiment and track metrics that indicate whether your actionable plan is working. Remember to go back and update your RDCL strategy as you learn from your iterative execution and measurement.

Here’s an example of how you can use the execution and measurement template. At Likelii, we wanted to help users find wines they were likely to like. One of the elements of our RDCL strategy was our design to understand users’ taste preferences without scaring them off. Our hypothesis was this:

If [we ask users to name a wine they like], then [most users will answer the question], because [it takes little effort to enter the name of a wine and they can get personalized recommendations right away].

To test this hypothesis, we tracked a key metric: the number of users who were answering this question. Unfortunately, it turned out that users often had a hard time recalling the name of a wine they had enjoyed—only 20 percent answered this question! Our strategy of asking them to name a wine to get their taste preferences wasn’t working.

To improve our strategy we developed the following hypothesis:

If [we create a simpler quiz to understand users’ taste preferences], then [users will complete the quiz], because [unlike our original approach of asking them about their favorite wine, the new quiz doesn’t create cognitive load].

To test the above hypothesis, our activities included crafting a short quiz with pictures to get their taste preferences. To understand how tannic they liked their wine, we asked them how they liked their tea or coffee: black, with milk, or with milk and sugar. To understand their preferences on acidity, we asked which fruits they liked in their fruit salad. We deduced their tastes from simple questions.

When we launched this quiz, we found that our simplified approach was working—over 70 percent of users completed this quiz! Our measurements and iterations were driven by our strategy.1
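As a small illustration of the measurement side, here is a sketch of how a completion-rate metric like the one above could be computed from event data. The event names and records are made up for the example, not Likelii’s actual instrumentation.

```python
# Hypothetical events: one record per user action during onboarding.
events = [
    {"user_id": 1, "step": "started_quiz"},
    {"user_id": 1, "step": "completed_quiz"},
    {"user_id": 2, "step": "started_quiz"},
    {"user_id": 3, "step": "started_quiz"},
    {"user_id": 3, "step": "completed_quiz"},
]


def completion_rate(events, start="started_quiz", done="completed_quiz"):
    """Share of users who reached `done` among those who reached `start`."""
    started = {e["user_id"] for e in events if e["step"] == start}
    completed = {e["user_id"] for e in events if e["step"] == done}
    return len(completed & started) / len(started) if started else 0.0


print(f"Quiz completion rate: {completion_rate(events):.0%}")  # 67% for this toy data
```

Tracked over time, a metric like this tells you whether the hypothesis behind your design is holding up.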

Writing a hypothesis for each feature or element of your RDCL strategy may seem tedious, but you’ll find that this hypothesis-driven approach becomes a way of thinking. As you’ve seen in previous chapters, the goal with RPT is to create intuition. The above template is designed to give you practice in thinking more deeply about metrics. Once you’ve developed muscle memory, this technique will become second nature and will feel like intuition. You’ll begin to formulate a hypothesis in your mind every time you add a feature to your product or start a new strategic initiative in your company.

USING RADICAL PRODUCT THINKING WITH ITERATION

The examples of hypothesis-driven execution in this chapter illustrate that Radical Product Thinking pairs well with feedback-driven execution methodologies such as Lean Startup and Agile.

A hypothesis-driven approach means starting with the mindset that your vision and strategy are hypotheses. Radical Product Thinking helps you define and communicate what you’re building and why. Lean and Agile help you execute, learn, and iterate under uncertainty. As you learn from your hypothesis-driven execution, you’ll go back and refine your strategy, and possibly your vision, based on these learnings, as illustrated in figure 7.

To avoid becoming iteration-led, you must ensure that your Lean and Agile activities are driven by your vision and strategy. For example, Lean Startup emphasizes launching a minimum viable product (MVP), a version of a product with just enough features to satisfy early customers and provide feedback for future product development. It’s important to think about your RDCL strategy when planning your MVP.

FIGURE 7: How Radical Product Thinking fits with Lean and Agile execution

Most likely you’ve heard the generalization that an MVP must be scrappy—often, it’s followed by a quote from Reid Hoffman, cofounder of LinkedIn, saying, “If you are not embarrassed by the first version of your product, you’ve launched too late.” This may be true for some markets, but it really depends on your RDCL strategy and the real pain points you’re addressing.

The key criterion for your MVP is that it must be viable as a solution to satisfy early customers. For example, at a robotics and warehouse automation company, the equipment was mission-critical to clients. If the system broke down, the customer’s warehouse came to a standstill and lost money because of delays in outgoing deliveries. As a result, what was minimally viable to customers was a well-developed system with high uptime and reliability. Compare this to a phone app for a shopping list—you could afford to start with a very frugal MVP. Your MVP should be derived from your strategy and meet the real pain points that are most important to your customer segment.

The nature of your MVP, in turn, will affect your strategy. For example, if you were building a warehouse automation solution like the one above, your startup would need to raise a large round of funding to deliver such a fully viable product as the initial offering.

You can also use RPT with your Agile development processes to build your product incrementally. If you’re using Agile, sometimes the loudest customer decides what you’re going to build next. This effectively leads to a “micropivot” every few weeks based on what features bubble to the top as most urgent. As the vision often becomes disconnected from day-to-day activities, your product is at risk of becoming a muddled mess of contradictory features and functionality.

You can avoid this risk by using the RPT approach to execution and measurement to communicate your hypothesis, the experiment you ran, what you learned, and how it’s shaping the next set of experiments you’re going to run. You can also use the RPT approach to prioritization as you plan what you’ll build in your next increment—you can use the two-by-two vision-fit-versus-survival rubric to balance progress toward the vision and short-term business needs as you prioritize tasks and features.
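If it helps to see the prioritization idea in code, here is a toy sketch that buckets backlog items along those two axes. The quadrant descriptions and example items are placeholders of mine, not definitions from the rubric itself.

```python
from dataclasses import dataclass


@dataclass
class BacklogItem:
    name: str
    vision_fit: bool   # does this move us toward the vision?
    survival: bool     # does this help short-term business needs?


def quadrant(item: BacklogItem) -> str:
    """Bucket an item in a two-by-two vision-fit-versus-survival grid (labels illustrative)."""
    if item.vision_fit and item.survival:
        return "good for the vision and for survival"
    if item.vision_fit:
        return "invests in the vision at a short-term cost"
    if item.survival:
        return "helps survival but adds vision debt"
    return "helps neither; avoid"


backlog = [
    BacklogItem("Gift-a-coffee flow", vision_fit=True, survival=True),
    BacklogItem("One-off brand promo page", vision_fit=False, survival=True),
]

for item in backlog:
    print(f"{item.name}: {quadrant(item)}")
```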

As you learn from the experiments, you may discover the need to course correct or change your direction more dramatically. You could formalize this communication by setting up a regular cadence for reviewing your vision and RDCL strategy—for example, you may consider doing this every month as an early stage startup or every six months for a more mature product. Taking this approach helps you stay vision-driven as you continuously refine your product.

THE DANGER OF SETTING GOALS FOR PRODUCT METRICS

RPT defines a product as a continuously improving mechanism to create the change you intend. Once you know what metrics are important, you may be tempted to think that building a successful product is a matter of setting specific goals for your product metrics. After all, conventional wisdom says if you want to achieve something, you have to set measurable goals.

I often see product metrics used in setting Objectives and Key Results (OKRs), a framework many companies use to define goals across the organization, assign responsibility, and track outcomes (for example, “Get over 20,000 new signups”). In setting OKRs, teams are instructed to be aspirational and set ambitious goals.

Ironically, the goals that were designed to be aspirational become demotivating. Even high-performing individuals who are passionate about their product will advocate for less ambitious goal setting because of the fear of failing to achieve those goals.

In a joint paper titled “Goals Gone Wild: The Systematic Side Effects of Overprescribing Goal Setting,” researchers from Eller College of Management, Harvard Business School, Kellogg School of Management, and the Wharton School recommend that “goal setting should be prescribed selectively, presented with a warning label, and closely monitored.”2 They found that although specific, challenging goals can produce positive results, these same characteristics of goals often cause them to degrade employee performance, shift focus away from important but nonspecified goals, harm interpersonal relationships, corrode organizational culture, and motivate risky and unethical behaviors.

When you are building products, setting goals for product metrics is particularly contraindicated. The process of building a product is filled with uncertainty, and there are few right answers. Studies have found that on complex tasks where the correct strategy wasn’t obvious and where performance was more a function of strategy than of effort, do-your-best instructions led to better results than specific goals.3 In such cases, researchers found, specific goals may discourage experimentation and adaptive behavior and ultimately limit innovation.4

Another problem with setting goals for a few product metrics is that it narrows the focus to optimizing just those few metrics. To build successful products, you may have several hypotheses on what you could do better for your user, and as a result, you may be measuring and analyzing a large number of metrics. But OKRs are designed to get you to focus on just a few key metrics. Employees may optimize for those narrow measures of success, but this may come at the expense of other indicators of success that you’re not tracking. OKRs may be helping you reach a local maximum instead of the global maximum. In fact, researchers found that when individuals were given specific, challenging goals, it inhibited their learning from experience and degraded their performance compared to being given the simple instruction “do your best.”5

The case against goal setting becomes even more damning when it comes to stretch goals. Studies have found that the use of goal setting for “management by objectives” creates a focus on ends rather than means. Researchers found that people who were given specific goals were more likely to engage in unethical behavior than people who were told to do their best. Even more importantly, they found that the relationship between goal setting and unethical behavior was particularly strong when people fell just short of reaching their goals.6

The example of Lucent Technologies’ scandal in 2000 illustrates this adverse effect of stretch goals: the company reported that it had overstated its revenues by nearly $700 million. Richard McGinn, the former CEO of Lucent, was known for pushing audacious goals on his managers and had set the goal of achieving 20 percent annual revenue growth—an enormous target for a company with $30 billion in assets. Revenue magically appeared in each quarter, and Lucent committed $8 billion to “customer financing”—in essence, the company was giving away its product and labeling the transaction a sale.7 In a complaint letter, a former Lucent employee charged that McGinn and the company had set unreachable goals that caused them to mislead the public.

We repeatedly see how goals lead to behavior that’s not good for society. At Wells Fargo, executives developed a strategy of cross-selling products to their customers to increase their “share of wallet” with each customer. As part of this strategy, branch managers were assigned aggressive quotas for the number and types of products sold. If the branch did not hit its targets, the shortfall was added to the next day’s goals. In 2016, the scandal broke that to meet these aggressive targets, employees had been opening new accounts without customers’ knowledge—sometimes this even included forging signatures. In February 2020, Wells Fargo agreed to pay $3 billion in fines to settle the long-running probes into its fraudulent sales practices.8

Several studies have raised the possibility that people will resort to unethical behavior to reach goals, but these effects have been consistently ignored. Even in the seminal book on goal setting, authors Edwin Locke and Gary Latham predicted this effect, describing it as “the unintended dysfunctional effects” of goal setting. Yet they proposed only superficial solutions, including creating “control systems” and firing employees who violate ethics to reach goals “regardless of any revenue streams they generate or costs they reduce.”9 Given that setting goals predictably leads to unethical behavior in both theoretical and empirical research, continuing to use goals for management by objectives means ignoring the evidence: we perpetuate a system that damages performance and incites bad behavior while expecting different results.

Awareness of the perverse effects of goal setting is increasing, and some companies are modifying their approach. A minor step is divorcing OKRs from performance appraisals. In an article by Evan Schwartz, Laszlo Bock, senior vice president of people operations at Google from 2006 to 2016, explains why OKRs shouldn’t be tied to performance: “Google tied OKRs for usage of a product directly to people’s compensation. People started gaming the system to get their bonuses. The very idea of tying monetary incentives to hitting key results was thus deemed detrimental to both the product and the broader culture.”10 Bock and Google popularized OKRs, but to deal with the side effects, they recommend not using OKRs for performance management.

Unfortunately, divorcing OKRs from performance appraisals isn’t enough—even when OKRs aren’t tied to monetary incentives, the process of setting OKRs requires listing who is responsible for achieving a particular goal. This means if a goal is not achieved, everyone is aware of whose failure it was—it implicitly ties performance to achieving goals.

Bruce McCarthy, author of Product Roadmaps Relaunched, recommends another way to deal with the side effects in his workshops on OKRs: remember that OKRs can be recalibrated. If you see that the OKRs you set are the wrong indicators of progress or that they’re impossible to achieve, you should change them.

Smaller companies may be able to recalibrate, but in larger companies OKRs are often coordinated across divisions—getting buy-in and setting OKRs every year takes incredible effort. If you discover during the year that some of the goals were set incorrectly, how much of the coordination and realignment are you willing to revisit? The prospect seems daunting. In fact, an executive in a large organization responded to the suggestion of readjusting OKRs periodically by saying, “We’d die if we had to do this many times a year.” OKRs, once set, are hard to adjust, and teams may end up working toward a goal even when it’s clear that it’s not the right measure of success.

Indeed, this is what Spotify stated when it announced on its HR Blog that it was no longer using OKRs:

What went into the OKR process was often already outdated when we got that far. So the OKRs that came out were too.

We noticed that we were putting energy into a process that wasn’t adding value. So we decided to ditch it and focus on context and priorities instead. We make sure everyone knows exactly where we are going and what the current priorities are, and then we let the teams take responsibility for how to get there.11

This sounds remarkably aligned with the RPT approach of defining a clear vision and strategy and translating it into priorities and execution.

Ironically, setting performance goals and OKRs can drive pursuits of local maxima and distract us from finding the global maximum. It’s time to abandon the approach of using product metrics to set performance goals. It’s time for a more radical approach.

MEASUREMENT THE RPT WAY

The RPT approach to measurement is designed as a collaborative approach to help your team to continuously learn and improve your product in an ethical manner. This means aligning the team on what metrics indicate whether your product is creating the change you intended and then managing progress through regular feedback.

OKRs too were intended to create alignment by quantifying the impact the organization strives to create. Most organizations have broad vision statements and haven’t yet transitioned to Radical Vision Statements. In the absence of a detailed vision, OKRs offer a detailed narrative of the desired impact, but these come with side effects. To achieve the same alignment without the side effects of goal setting, start by creating a Radical Vision Statement.

The RPT approach to crafting a vision gives teams a clear picture of the world you’re collectively bringing about, and the RDCL strategy helps you translate that vision into an actionable plan. You can use the vision and strategy to align the team on the direction and magnitude of the change you want to bring about. By running these cocreation sessions as group exercises, you’ll have the team’s participation and buy-in on where you’re going.

Once you have a clear vision and strategy, you can list the key metrics that would indicate progress—just refrain from setting targets. Periodically you’ll need to review whether these metrics continue to be the right ones to measure.

If you’re changing from measuring popular metrics to what’s right for your organization, you may need to educate your team and investors on how you measure success. It’s easy to fall prey to setting your measurement strategy based on how investors or stakeholders might define traction. In an organization where ease of use was defined as “anything should be one click away,” the team had developed a website where any information you needed was one click away on the home page—if you could find it, that is.

When the team built a replacement for this product that organized information well, many elements that used to be on the home page were moved behind a tiered menu. Success for us wasn’t about the number of clicks it took users to find what they were looking for but whether people spent less time on the site because they found what they needed very quickly. In making these changes, we needed to communicate a change in how we were measuring success. This alignment is also important because measurement costs time and resources—both to build the ability to capture the data and analyze the data when it becomes available.

Once you have alignment on what to measure, you can begin to share and discuss product metrics. Organizations often use OKRs to assign responsibility for achieving specific metrics and to hold people accountable for outcomes. To achieve accountability without the adverse side effects of a goals-based approach, you can have product teams present product KPIs at regularly scheduled update meetings.

It’s important to create a collaborative setting where you celebrate successes and, equally importantly, where teams feel comfortable openly sharing what could be improved and building on one another’s ideas.

Teams have more stats and inside knowledge on their product than higher-level management—if teams feel like they’re going to be punished for the metrics they present, managers will get only a selective exposure to favorable metrics or overstated results. Creating a collaborative setting for an open discussion around metrics requires a culture with psychological safety that encourages learning behavior (more on this in the next chapter).

To create such a setting, managers can give teams regular feedback on metrics instead of managing progress by setting goals for what the team should achieve. You’ll need to develop a joint understanding of the baseline metrics today and talk about what improvements you want to see, how fast you want to see those improvements, and how that affects priorities. While goal setting and management by objectives are analogous to an end-of-the-year exam, regular feedback cycles throughout the year create opportunities for continuous learning.

Teams that use OKRs spend long hours negotiating these goals. Instead, you can allocate those same hours to regularly scheduled cross-functional discussions where teams present metrics, share their learnings, and get feedback and suggestions from others across the organization. These discussions achieve increased alignment as well as accountability.

Lisa Ordóñez, the lead researcher on “Goals Gone Wild,” is now the dean of the Rady School of Management at the University of California, San Diego. Even with her administrative responsibilities, she has kept up with academic research on goal setting and shared the following with me after reading about the RPT approach to measurement: “My research has revealed the negative impact of goal setting, especially in promoting unethical behavior. One reaction might be to eliminate goals and metrics entirely. However, the Radical Product Thinking approach to measurement allows organizations to align priorities and use metrics in a productive way. It retains the best part of goal setting (directing and aligning actions) without the negative aspects.”

Your product is your constantly improving mechanism to bring about the change you envision in the world. Radical Product Thinking helps you deeply connect your execution and measurement to your vision and strategy so you can bring about that constant improvement. In the next chapter, we’ll talk about how you can use this new way of thinking to cultivate a culture that facilitates building vision-driven products.

• In the Radical Product Thinking way, vision and strategy drive hypothesis-driven execution and measurement. Instead of measuring what’s popular, measure what’s right for your organization.

• This means creating a series of hypotheses and setting up experiments to validate your vision and RDCL strategy.

• RPT is often used together with Lean and Agile methodologies.

• The execution and measurement template includes three elements:

1. Key metrics: These are the key indicators of whether your approach is working.

Think about what metrics indicate progress toward your vision.

What metrics will you measure to know if each element of your RDCL strategy is working?

2. Hypotheses: Your hypothesis identifies the connection between what you deliver and the metric.

You can write a hypothesis using the following Mad Libs statement: If [experiment], then [outcome], because [connection].

3. Activities to set up the experiment: In this section of the template you can identify the tasks needed to set up your experiment.

If you’re using an Agile development process, these activities drive your Agile Sprint.

• Setting goals for product metrics is tempting, but you must resist. The RPT way is a collaborative approach to measure and learn as a team.

• To align your team on metrics and replace goal setting through OKRs, you can take these three practical steps:

1. Align on what you’ll measure.

2. Create a safe environment to discuss metrics.

3. Manage progress through regular feedback.
