5

Plan for Change and Uncertainty

When is work finished? For most of us, it seems pretty simple: it means getting our work done. We learn this early in our school lives: finish your homework. Do your chores. When you’re done, you get to stop working, get to play with your friends, read your books, watch a movie, and so on. We take this idea into our workplaces: finish that report. Do your rounds. Go to a meeting. When you’re done? “Quittin’ time!”

But we need to take a step back and consider what “done” really means. Does it mean that we’ve shipped a product or launched a service? Does it mean that it’s making money for the company? Oddly, usually not. It’s usually a few steps back from that. Sometimes it means, “We’ve built the thing you specified in the contract.” Or sometimes it means, “We’ve written software, tested that it works, and deployed it to a server.”

Usually, though, it doesn’t mean, “We’ve finished making something that we know adds value to the business.”

This is an important distinction, and we need to be clear about the differences. Most teams in business work to create a defined output. But creating an output is not the same as being successful. Just because we’ve finished making a thing doesn’t mean that thing is going to create value for us. If we want to talk about success, we need to specify the target state we seek. Let’s call that desired success an outcome.

For example, we may ask a vendor to create a website for us. Our goal might be to sell more of our products online. The vendor can make the website, deliver it on time and on budget, even make it beautiful to look at and easy to use, and it may still not achieve our goal, which is to sell more of our products online. The website is the output. The project may be “done.” But if the outcome—sell more products—hasn’t been achieved, then we have not been successful.

This may seem rather obvious, but if you look at the way most companies manage digital product development, you’d be hard-pressed to see these ideas in action. That’s because most companies manage projects in terms of outputs and not outcomes. This means that most companies are settling for “done” rather than doing the hard work of targeting success.

Defining Done as Successful

Do companies really manage for done instead of success? And if so, why would they do that?

It turns out that there are some situations when these ideas are the same thing or have such a clear and well-understood relationship that they might as well be the same thing. This is frequently the case in industrial production. Because of the way industrial products are designed and engineered, you know that when your production line is spitting out Model T’s, you can be reasonably certain they will work as designed. And, because of years of sales history, you can be reasonably certain that you will be successful: you will sell roughly the number of cars you forecast. Managers working in this context can be forgiven for thinking that their job is simply to finish making something.

With software, however, the relationship between we’re finished building it and it has the effect we intended is much less clear. Will our newly redesigned website actually encourage sharing, for example, or will the redesign have unintended consequences? It’s very difficult to know without building and testing the system. And—in contrast with industrial production—we’re not making many instances of one product. Instead, we’re creating a single system—or set of interconnected systems that behave as one system—and we are often in the position of not knowing whether the thing we’re making will work as planned until we’re “done.”

In Uncertainty, Specifying Output Doesn’t Work

This problem of uncertainty, combined with the nature of software, means that managing our projects in terms of outputs is simply not an effective strategy in the digital world. And yet, our management culture and our management tools are set up to work in terms of outputs. To consider one example, let’s look at how companies typically purchase software from a third-party vendor.

In a typical process, we might commission an internal team to develop a request for proposal (RFP). This RFP would be based on some analysis of the business problem, would specify the nature of the solution and provide a list of requirements—typically features of the system—and request that vendors submit proposals.

Based on the RFP, vendors will submit proposals, typically specifying how they will go about building the solution: how long it will take, who will work on it, how much it will cost, and, of course, why the vendor is uniquely suited to doing this work.

Once we select a vendor, we then write a contract based on (1) the requirements we developed and (2) the price and time line that the vendor promised. When we sign the contract, both parties are committed to a project based on output. The vendor is committed to building a set of features—in other words, being done—rather than committed to creating something successful.

Identifying the Problem with Output

Of course, if you’ve purchased custom software with a process like this, you know what happens in this scenario. The vendor does not deliver as promised. Why? One veteran IT manager put it this way: “The problem is fixed-price contracts,” he told us. “Both of you are fooling each other that you understand the problem.” As a result, then, everyone must adjust when the true nature of the problem becomes clear. The result of the adjustments? The vendor is late or over budget. This IT veteran continued, “There’s always a problem at the end, and, instead of solving the problem or improving the product, you end up fighting about who is going to pay for it.”

Using the Alternative to Output: Outcomes

The old cliché in marketing is true: customers don’t want a quarter-inch drill; they want a quarter-inch hole. In other words, they care about the end result and don’t really care about the means. The same is true for managers: they don’t care how they achieve their business goals; they just want to achieve them.

In the world of digital products and services, though, uncertainty becomes an important player and breaks the link between the quarter-inch drill and the quarter-inch hole. Some managers try to overcome the problems caused by uncertainty by planning in increasingly greater detail. This is the impulse that leads to detailed requirements and specification documents, but, as we’ve come to understand, this tactic rarely works in software.

It turns out that this problem—the way our plans are disrupted by uncertainty, and the fallacy of responding with ever-more-detailed plans—is something that military commanders have understood for hundreds (if not thousands) of years. They’ve developed a system of military leadership called mission command, an alternative to rigid systems of leadership that specify in great detail what troops should do in battle. Instead, mission command is a flexible system that allows leaders to set goals and objectives and leaves detailed decision making to the people doing the fighting. Writing in The Art of Action, Stephen Bungay traces these ideas as they were developed in the Prussian military in the 1800s and describes the system that those leaders developed to deal with the uncertainty of the battlefield.1

Mission command is built on three important principles that guide the way leaders direct their people:

  • Do not command more than necessary, or plan beyond foreseeable circumstances.
  • Communicate to every unit as much of the higher intent as is necessary to achieve the purpose.
  • Ensure that everyone retains freedom of decision within bounds.

For our purposes, this means that we would direct our teams by specifying the outcome we seek (our intent), allowing for our teams to pursue this outcome with a great deal of (but not unlimited) discretion, and expecting that our plans will need to be adjusted as we pursue them.

Managing with Outcomes

Let’s look at an example of how one team we worked with put these principles to work. In 2014, Taproot Foundation wanted to create a digital service that would connect nonprofit organizations with skilled professionals who wanted to volunteer their services. Think of it as a matchmaking service for volunteers. Taproot Foundation had to work with outside vendors and ended up choosing our firm for the project.

In our early conversations, Taproot leaders described the system that they wanted to build in terms of the features of the system: it would have a way for volunteers to sign up, it would have a way for volunteers to list their skills, it would have a way for nonprofit organizations to look up volunteers based on these skills, it would have a contact system for organizations to reach out to volunteers, it would have a scheduling system to allow the parties to arrange meetings, and so on. We were concerned about this feature list. It was a long list, and although each item seemed reasonable, we thought we might be able to deliver more value faster with a smaller set of features.

To shift the conversation away from features, we asked, “What will a successful system accomplish? If we had to prove to ourselves that the system was worth the investment, what data would we use?” This conversation led to some clear, concrete answers. First of all, the system needed to be up and running by a specific date, about four months away. The foundation participates in an annual event to celebrate the industry, and executives wanted to have a demonstrated success they could show off to funders at that event. We asked, “What does up and running mean?” Again, the answers were concrete: we need to have X participants active on the volunteer side, and Y participants active on the organization side. Because the point of the service would be to match volunteers with organizations so that they could do projects together, we should have made Z matches, and a certain percentage of those matches should have yielded successful, completed projects.

This was our success metric: X and Y participants; Z matches; percent of completed projects. (We actually set specific numerical targets, but for this telling, we’re using variables.)
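
To make the idea concrete, here is a minimal sketch of how a team might check outcome targets like these against live data. It is purely illustrative: the metric names, target levels, and actual values are invented, not Taproot’s.

```python
# Illustrative only: the contract's outcome targets expressed as data that can
# be checked against the live service. All names and numbers are invented.

targets = {
    "active_volunteers": 200,
    "active_organizations": 50,
    "matches_made": 75,
    "completed_project_rate": 0.25,
}

# What the running service reports (again, invented figures).
actuals = {
    "active_volunteers": 245,
    "active_organizations": 61,
    "matches_made": 90,
    "completed_project_rate": 0.31,
}

for name, target in targets.items():
    status = "on track" if actuals[name] >= target else "at risk"
    print(f"{name}: {actuals[name]} vs. target {target} -> {status}")
```

The point is not the code; it is that an outcome, unlike a feature, can be measured against the running service at any time.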

Next we asked, “If we can create this system and achieve these targets without building any of the features in your wish list, is that OK?” This was a harder conversation.

The executives signing the contract were understandably concerned: what guarantee did they have that we would complete the project?

This is the bind that executives and managers face: as they negotiate with partners, they are bound to protect their organizations. They need to find contractual language that ensures the partners will deliver. The problem with contracts, though, is that to make them work, managers are forced to settle for the protection they find in the concrete language of features: you build feature A, and we will pay you amount B. But this linguistic certainty is a false hope. It guarantees only that your vendor will get to “done,” as in, “The feature is done.” It does not guarantee that the set of features that you can describe in a contract will make you successful. On the other side, vendors are understandably hesitant to sign up to achieve an outcome, mostly because vendors rarely control all of the variables that contribute to project success or failure. Thus, both sides settle for a compromise that offers the safety of “done” while at the same time creating constraints that tend to predict failure rather than create the freedom that breeds success.

Our contract with Taproot, then, contained not only a list of desired features but also a list of desired outcomes. Here are the outcomes we wrote into the contract:

The system will connect volunteers to organizations [at the following rate]. It will allow these parties to find each other, complete the communications needed to decide to work together, complete projects together, and report on the success of those projects. It will do so at [the following rates] and by [the following date]. If at any time, the team decides together that the desired outcomes are better served by building a different set of features than the desired features listed above, they may do so.

This language is a paraphrase—there was more legalese—but this was the essence of the agreement. This compromise—listing the features we thought were important, but being clear about outcomes and agreeing in advance that outcomes are more important—is the key to managing with outcomes instead of output.

It’s worth acknowledging here that many organizations have little flexibility in terms of project funding processes and procurement rules, so this type of contract may be out of reach for some managers. But as we discuss in chapter 3, forward-thinking organizations are working from within to change that.

Seeing Results

So how did the project play out? First, the team decided that the most important milestone was to get the system up and running. Rather than wait four months—the length of the project—to launch, the team members decided to launch as quickly as possible. As it turned out, they were able to go live to a pilot audience within about one month. They launched a radically simplified version of the service, one with very few automated features. Most of the work in the system was done by a person, a community manager, playing a behind-the-scenes role. (This is the same approach used by the Cooking Light Diet team, which we describe in chapter 2.)

This concierge minimum viable product (MVP) approach has become a popular way to launch systems. The Taproot team knew it would need more automation if it wanted the system to scale, but it also knew automation could come later. Launching early achieved two goals. First, it ensured that the team would have something to show to funders at the annual event. This was a hugely important marketing and sales goal. But launching early addressed an even more important goal: it allowed the team to learn what features it would actually need in order to operate the system at scale. In other words, it allowed the team to establish the sense and respond loop—the two-way conversation with the market that would guide the growth of the service.

The project planners had imagined, for example, that the skilled volunteers would need to be able to create profiles on the service. Organizations would then browse the profiles to find volunteers they liked. This turned out to be exactly wrong. When the team tried to get volunteers to make profiles, they responded with indifference. The team realized that, in order to make the system work, volunteers had to be motivated to participate; they needed to find projects that they were passionate about. In order to do this, the system needed project listings, not volunteer listings. In other words, the team had to reverse the mechanics of the system, because the initial plans were wrong.

By the second month of the project, the team had built the system with the revised mechanics and then concentrated on tuning the system: identifying the details of the business processes needed and building software to support those processes. How would the team make it easy for organizations to list their projects? How would team members make sure the listings were motivating to volunteers? How simple could they make the contact system? How simple could they make the meeting scheduler? At the end of the four-month project, the team had a system that had been up and running for three months and that far exceeded the performance goals written into the contract.

Solving the Local Knowledge Problem

Projects like this work because they follow the principles of mission command. They give teams a strategy and a set of outcomes to achieve, along with a set of constraints, and then give them the freedom to use their firsthand knowledge of the situation to solve the problem. In this case, the strategy that Taproot Foundation was pursuing was to use the power of the internet to increase the organization’s impact by a factor of 10. The strategic constraints for the project were clear as well: funders had paid for the team to create an online matching service. No matter what the team did, it would have to produce an online matching service, although it had considerable freedom to define what that service would look like. The team also had a hard constraint in the form of a date: the system had to be up and running by a date four months in the future. But again, the team had considerable freedom to decide the definition of up and running.

This approach to project leadership is not common, but we see it more frequently on startup teams and in smaller organizations. Indeed the Taproot project was delivered by a single small team working with little need to coordinate with others. Scaling this approach to multiple teams and to larger organizations is a difficult and subtle problem, one that requires careful balance between central planning and decentralized authority.

We have seen many examples in the modern era of the failure of central planning. One need only look at the failures of the Soviet bloc and late-twentieth-century communist Chinese economies to find examples of what economists call the local knowledge problem: the idea that central planners don’t have sufficient understanding of the tactical reality on the ground to make detailed plans. How much bread should go to this town? How much wheat should be allocated to this factory? What if there’s a bad crop? What if the storage facility has a fire? What if the region is a rice-eating region?

The opposite of central planning is decentralized authority. At the extreme end of this decentralized spectrum are systems like anarchy, holacracy, and even, in some eyes, agile software development.

Agile does indeed put a great deal of stock in allowing small, egalitarian teams to make decisions. At a small scale, this resembles systems like anarchy, with their radically inclusive visions. But anarchy and holacracy make claims about how their systems scale; holacracy advocates claim that you can run large organizations without traditional hierarchies. Agile has, until recently, made no such claims. This idea—that agile has mostly ignored organization-scale problems and focused on team-level problems—was captured nicely by technology consultant Dan North. In a conference talk in 2013, North described it this way.

Agile doesn’t scale. There, I said it. Actually people have been telling me that for over ten years, and I’ve just refused to believe them, but they were right. Does that mean you can’t deliver large-scale programmes using agile methods? Not at all.

But to scale you need something else, something substantively different, something the Agile Manifesto and the existing team-scale agile methods don’t even have an opinion about.2

Managing at the Program and Portfolio Levels

In the Taproot story, you saw how a single team can approach a project with agile methods. But if we truly want to create agile organizations, then we need to consider how agility applies not only at the team level but also at two additional levels above the team. The first is the program level: a group of two or more teams working in coordination to achieve a shared goal. The second is the portfolio level: the collection of all the work in an organization.

In recent years, agile has moved from being a cultlike movement to being a mainstream way of working. (A recent report commissioned by Hewlett-Packard estimated that more than 90 percent of large IT organizations are either primarily using agile approaches or making significant use of them.)3 And as agile methods have become mainstream, organizations around the world are trying to find solutions to making agile scale. This is because, as North indicates, agile is essentially a “team-scale” method of working, and large organizations need a system to coordinate the work of many teams.

One of the more popular approaches to this coordination is something called the Scaled Agile Framework, or SAFe. As implied by the acronym, SAFe provides managers with a measure of comfort. After all, a huge organization filled with self-guided teams is a scary idea for managers. It sounds a lot like anarchy.

SAFe is a way of decomposing large projects into smaller pieces, assigning those pieces to teams, and creating accountability to ensure that teams complete the work that they’ve signed up to complete. The problem with this approach is that it is essentially a “more detailed plan” approach, and it ignores the influence of uncertainty. SAFe moves teams away from a sense and respond approach and toward a central-planning approach. In effect, it reduces the agile team to a production team, giving them a fixed set of requirements and expecting a specific output to emerge from the end of the assembly line. This approach can be appropriate for high-certainty efforts, but it limits an agile team’s ability to learn from feedback as it goes forward. And again, it’s this learning from feedback that allows teams to navigate in high-uncertainty contexts.

Instead of trying to fit agile into a command and control framework, we’ve seen many organizations adopt coordination approaches that are more in line with mission command—that move away from planning with outputs and toward managing with outcomes. These approaches use different tactics to coordinate the effort of large teams, but they tend to create something we call outcome-based road maps.

Using Outcome-Based Road Maps

Outcome-based road maps take many forms. We look at a few in the coming section, but before we do, let’s consider their key elements. Outcome-based road maps work because they help create a multiteam implementation of mission command. They are a way of articulating, in a cascading manner, the key elements we need when we direct the work of teams:

  • The strategic intent (“We want to increase the organization’s impact by a factor of 10”)
  • The strategic constraints (“We will do this by creating an online matching service that must be live by X date”)
  • The definition of success (“The service will match parties at X rate”)

When implemented well, outcome-based road maps help organizations create alignment, which is critical to making mission command work.
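
As a rough illustration of how these cascading elements might be captured for a single road map entry, here is a small sketch. The structure, names, dates, and numbers are ours, invented for illustration; they are not drawn from any of the road maps described in this chapter.

```python
# Hypothetical structure for one entry on an outcome-based road map:
# intent and constraints come down from leadership, success is defined
# as outcomes, and the "how" is left to the team. All values are invented.

from dataclasses import dataclass, field

@dataclass
class RoadmapEntry:
    strategic_intent: str                   # the "why" handed down from leadership
    constraints: list[str]                  # hard boundaries the team must respect
    success_metrics: dict[str, float]       # outcomes to hit, not features to ship
    proposed_initiatives: list[str] = field(default_factory=list)  # the team's "how"

entry = RoadmapEntry(
    strategic_intent="Increase the organization's impact by a factor of 10",
    constraints=["Must be an online matching service", "Live within four months"],
    success_metrics={"matches_per_month": 25, "completed_project_rate": 0.25},
    proposed_initiatives=["Concierge MVP", "Project listings instead of volunteer profiles"],
)
```

Notice that the entry commits the team to outcomes and constraints; the initiatives remain the team’s to revise as it learns.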

Bottom-Up Meets Top-Down Communication

There is a critical component of mission command that goes beyond simply what you communicate. Just as important is how you communicate it. Orders must be briefed up and down the chain of command; in other words, the communication and conversation must go in both directions, and this briefing up and down is ongoing. It’s continuous. It is the process of communicating this way that creates alignment.

In researching this book, we learned about a company that was putting these methods to work as part of its annual planning process. This firm, an e-commerce startup based in the United States, is one of the more successful organization-wide practitioners of agile methods. It’s not a by-the-book agile firm. Instead, it embodies many of the ideals and practices that are at the heart of agility. Among other things, it was an early and successful adopter of what is now called continuous deployment—the idea that software is not released every few months, or even every few weeks, but instead is released continuously. (We talk about this in chapter 1. It’s the process by which Amazon is able to release software every 11.6 seconds.) Over the years, it has developed a culture around experimentation, A/B testing, and optimization.

The agile methods it implemented made it possible to take an experimental approach to releasing new software. Let’s say, for example, it wants to redesign a product page on its website. Rather than guess which page performs better, it quickly designs and builds a few versions of the page, releases them to the site, directs a small, carefully controlled set of users to each version, and then measures which page performs best.
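
For readers who want to see the mechanics, here is a minimal sketch of how such an experiment might assign users to page variants. It is not the company’s code; the variant names and traffic splits are invented, and a real system would also log conversions per variant and test the results for statistical significance.

```python
# Hypothetical sketch of the experiment mechanics described above: route a
# small, stable fraction of users to each page variant, then compare results.
# Variant names and traffic splits are invented for illustration.

import hashlib

VARIANTS = {"control": 0.90, "redesign_a": 0.05, "redesign_b": 0.05}

def assign_variant(user_id: str, experiment: str = "product_page") -> str:
    # Hash the user and experiment together so each user always sees the
    # same variant for the life of the experiment.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    point = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform value in [0, 1]
    cumulative = 0.0
    for name, share in VARIANTS.items():
        cumulative += share
        if point <= cumulative:
            return name
    return "control"  # fall back if the shares don't sum to exactly 1.0

print(assign_variant("user-12345"))  # the same user id always gets the same answer
```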

This experiment-based approach, because it is easy to do and because it yields powerful results, has quickly become a core element of the company’s culture. It is normal for teams to work in this manner, continuously testing and optimizing their work a little at a time. But one manager told us that, in the past, this became a problem: “We were focusing on quick wins, rather than program-building.” The problem seems to have been in choosing what to test and, more generally, what to work on. When the company was smaller, it was easier to align the work of teams through informal means. But as it grew, more coordination was needed.

By the end of 2015, the company had grown to more than five hundred people and was generating hundreds of millions of dollars in annual revenue. The coordination problem was becoming acute. Executives knew they needed to create more alignment and better coordinate their activities. To do so, they put in place a top-down plus bottom-up planning effort in order to create a road map for the year.

Building an Outcome-Based Road Map

The first part of the work involved senior leaders, who created a list of strategic themes for the coming year. These themes would be the top-down guidance—the coordinating ideas that would serve as the rudder for the ship. Strategy is about choices. It’s as much about what you don’t do as what you choose to do. So for this year, following a period of intense focus on one segment of customers, the executives chose to focus on a different segment, one they felt had been underserved previously. From this focus, the leaders created a number of smaller themes, including a focus on the mobile experience. (Stephen Bungay points out that a good strategy statement often “looks banal” to outsiders. The value comes from the alignment you create through the continuous process of articulating the strategy.)4

With the top-down strategic themes prepared, all the delivery teams were asked to create a list of initiatives they wanted to work on in the coming year. This was the bottom-up part of the process. The list came from the front line, the cross-functional team members who knew the most about the product, the customers, and the users. These were expert, informed opinions, deeply rooted in the situation on the ground. The teams also had to provide an estimate of the specific, measurable business outcomes they believed each initiative would create.

Next, the product leaders in the middle management layer needed to figure out how to coordinate the effort—in other words, how to create the road map for the year. The leads of the product teams came together to organize the wish list of initiatives. They grouped the initiatives in terms of which themes each project supported and then stack-ranked them in terms of the contribution they believed each initiative would have—that is, they associated each initiative with the outcome they thought the work would create and showed how those outcomes would support the strategic goals expressed by leadership. They estimated head count for each initiative and sent the results to the finance experts, who correlated these plans with some of the major financial metrics they tracked and considered how the proposed work might impact results. When this was done, the plan was sent back up to the executives for review.

The proposed plan now faced what could be thought of as an editorial review by the executive team. The assessment? The plan was close but not ready. When the executives reviewed the plan, they realized they had missed an important feature they had promised the market, so they added that. They made a few adjustments and crossed some things off the list, and then they were ready with the road map.

This is an example of what we consider an outcome-based road map. It neatly ties the work you’re planning to the outcomes you believe the work will have, and it ties the outcomes you seek to the strategic objectives you are trying to achieve. It creates a coherent story that connects the leadership of the organization to the troops on the ground.

One manager at the company told us, “The best companies have a ‘product editor.’ They have a story. If you look at Apple, they have a story, a narrative. With this approach, we have a story. As a product lead, I love this. I have direction.”

Assessing the Cultural Impact

For this young company, this planning process was new; it replaced a different process the company had used the year before, and yet another process from the year before that. So change was normal, but that doesn’t mean everyone liked it. Most of the managers appreciated the clarity as well as the flexibility to pursue their initiatives. And other product teams appreciated the clearly defined negative space: “We don’t have to work on that initiative, because it’s not on the road map.” But of course there were some hurt feelings, too. People whose proposed initiatives didn’t make the cut were disappointed. Still, on balance, the process seems to have been a big step forward compared with earlier planning efforts.

Let’s review some of the elements that made this process successful and, at the same time, consider how it supports the sense and respond approach that this company embodies.

  • Strategy is expressed as intent. Rather than lay out a detailed plan, leadership set direction and asked the folks close to the customer to figure out the details.
  • Situational awareness defines tactics. The staff members had deep knowledge of the real-world conditions, what they’d like to fix, and what was realistic. They were able to select the best tactics to achieve the mission.
  • Commitments are made to outcomes rather than features. By tying initiatives to outcomes rather than to features, leaders gave staff members the flexibility to pursue their missions and use their best judgment to achieve the desired outcomes.
  • A mix of bottom-up and top-down planning provides balance. Unlike previous years, in which bottom-up planning resulted in a lack of coordination—or, as we see in many organizations, top-down planning creates a lack of flexibility—this process balanced inputs to create a healthy equilibrium.

Using Outcome-Based Road Maps at GOV.UK

The bottom-up meets top-down approach has also been part of the road mapping effort at GOV.UK, the official government website of the United Kingdom. The teams working on this site have been part of a pioneering effort to rebuild the online presence of the national government. The work being done there gives us a unique opportunity to study modern digital methods because of the project’s unwavering commitment to openness.

Writing about the road mapping process, Neil Williams, a product lead at the UK’s Government Digital Service, said, “Probably the toughest challenge in road mapping on a large, multiteam product is striking the right balance between (top-down) business goals and (bottom-up) team priorities.”5

To align teams and strike this balance, the GDS uses what it calls “mission statements.” Mission statements give “broad direction and boundaries to a problem space” that each team owns. This information provides strategic direction, guidelines about the constraints teams must observe, and, at the same time, the autonomy for teams to find the best solution to the problem they’ve been given. Mission statements function similarly to the strategic goals and outcomes that you saw the e-commerce startup use in its road mapping process. By aligning around missions, the GOV.UK road map is yet another type of outcome-based road map.

The road map also has to strike a balance between making specific promises and allowing flexibility in both time commitments and features to be delivered. In other words, it has to communicate clearly what teams intend to do and, at the same time, allow for plans to change and evolve in response to learning.

The team uses a combination of tactics here. First, team members are conscientious about time promises. And second, they try to limit making hard commitments to anything except the most near-term work.6 They slice the future into three buckets. For “current” work, which looks ahead about a month, they make relatively firm commitments as to what they’ll deliver. Next comes “planned” work: work that is one to three months away from being started and is being considered but not yet confirmed. Finally, there is a longer-term bucket called “prioritized” work. This is work that is tentative.7

You should be able to see in this approach one of the central tenets of mission command: do not command more than necessary or plan beyond circumstances you can foresee.

Author Donald Reinertsen, who studies lean methods, has this to say about planning and alignment: “The modern military does not create plans for the purpose of measuring conformance; they plan as a tool for maintaining alignment.”8 Indeed, Williams told us the goal at GOV.UK is to create “aligned autonomy.”

The importance of alignment is visible in the GOV.UK project’s commitment to openness. All of its road maps are available to internal teams, stakeholders, and the public. The team uses a variety of tools to create this visibility: a huge physical poster wall in its office, an official and active blog available on the open web, and a web-based road map hosted in a tool called Trello.9 All of this visibility is an attempt to create alignment among teams, stakeholders, and the public.

Planning with a Customer-Centered Perspective

At Westpac, the oldest bank (in fact the oldest company) in Australia, the customer experience team has been applying something called customer journey mapping to create alignment for multiteam initiatives and programs.

A customer journey map is a big chart—at Westpac they are usually wall-sized charts composed of many sheets of paper—that shows the end-to-end journey customers complete as they interact with a business. For example, what’s the process for getting a credit card? Or for taking out a home loan? A business process like this is complicated and requires many teams to contribute. It touches many bank systems, from web and mobile apps to in-house systems used in the branches, in the call centers, and in the back office. The people creating and using these systems need to be working with some alignment if they hope to deliver good service to customers. The journey map is the focus of a larger “program design wall” that is intended to create alignment across all of these teams.

The customer experience team actually creates two journey maps for each process and puts them on the wall together. The first, what the team calls the As Is (or current state) map, shows the current process the customer navigates, warts and all. In fact, calling out pain points, bottlenecks, and inefficiencies is the entire point of the As Is map.

Then the team works with stakeholders to create a second version of the map—this one is a vision of the future. It’s a vision that will be better for the bank and for customers, because it eliminates the obstacles they currently face and delivers more value. For example, the team recently completed work on a vision to improve how customers are issued credit cards: the As Is version showed a process that takes five days or more for customers to complete. In the vision of the future it takes five minutes. The benefit to customers is obvious, but the benefit to the bank also is important. A better process will yield more customers and will get them active and using the credit card sooner.

Avoiding Feature-Based Road Maps

The Westpac team found that it took a while to get the vision maps right. Team members had to create a compelling story (and thus generate excitement and alignment) but avoid being too detailed. They didn’t want to lock in too soon on features that might not work. In other words, they wanted to preserve each team’s freedom of action—the team’s ability to own, or at least participate in, creating the right solution. Ian Muir, head of customer experience, told us, “The key is finding the right balance when you’re telling the story of the future and you don’t have all the information you want.”

With the two journey maps on the design wall, next comes the planning phase. At this point, it’s about getting the delivery teams into the room to review the map themselves. The teams have already been part of the process of creating the maps, but now the next step is to figure out how to deliver the experience they’ve helped imagine. This step is when self-direction and multiteam coordination meet. Dan Smith, a customer experience manager, says, “I tell them, ‘It’s not my vision here. What do you think we should be doing to make this better?’”

At this point team managers often find something on the map that makes sense for their team to work on, and the team takes ownership of that piece. For example, one major hurdle that stood out on the credit-card acquisition maps was that customers needed to actually go to a bank branch to prove their identity. Working at the map, Westpac was able to create a mobile-phone-based proof of identity that allows customers to avoid a trip to a branch. This seemingly simple feature took a lot of work from multiple teams and departments at the bank.

Smith told us that it’s not so much the customer journey maps, the design walls, or really any single road map artifact that creates value on its own. Rather, those artifacts serve as a backdrop for the most important part of the process: the collaborative gatherings at the design walls. Holding meetings and discussions at the walls, which are rich in research and in artifacts that the team has produced together, yields far greater value for all team members than the standard practice of making decisions in a meeting room after reviewing a presentation, or of debating decisions in a string of twenty emails over days. In this way, the design walls serve as the context in which teams can work together to align themselves around the same goals.

Addressing Experience Debt: Iteration Versus Increment

By building road maps around customer-centered journey maps, the Westpac team uses customer experience as a key dimension around which they align. This is a little bit different from the other examples we’ve shared in this chapter. Those teams used organizing principles that are more obviously based on business outcomes. There are advantages to organizing around customer experience, though, especially if it tracks to the strategic goal. At Westpac, this is indeed one of the strategic goals.

It can be hard to create a great customer experience in an agile context. One of the frustrations we see in some organizations that work in an agile way is something called experience debt, or design debt. Similar to technical debt (an engineering term for the housekeeping-type work that is useful but never gets prioritized and thus builds up over time), experience debt comes from the small design problems that build up over time and reduce the quality of the user experience: a confusing instruction here, an extra pop-up window there.

Designers want to be able to go back and improve things by iterating: working on the feature again. But project owners, whose performance is sometimes measured by how many features they release, may feel pressure to move forward to the next feature. Muir points out that neither choice is inherently right or wrong. “It’s a value trade-off,” he told us. Muir says that the journey maps help make this choice visible. Teams can look at the map, see the work that remains to be done, see which customer pain point needs to be addressed (or even addressed again), and make choices in the context of the vision of the future.

Muir told us that the customer journey maps help address what he calls “the translation gap.” This is the gap between what leaders want to do for customers and what they actually do. The gap isn’t intentional, Muir says. “I’ve never talked to anyone in the organization who says, ‘I want to give the customers a hard time.’” Instead, the gap comes from a lack of clarity and shared values. Stephen Bungay calls this phenomenon “the alignment gap,” which he defines as “the difference between what we want people to do and what they actually do.”10 Muir notes that it helps when leadership is talking about being customer-centered, because people hear that message and can use it to guide their actions.

In other words, alignment starts with leaders who are clear about values, but creating alignment takes a lot of work. And it requires planning tools that can reinforce the values that leaders seek. This is why a good road map must create a link between the work that is being done by staff, the outcomes created by that work, and the way those outcomes will help the organization achieve its strategic goals.

Gaining the Value of the Human-Centered Perspective

Part of the power of the customer journey map is that it aligns the work of multiple teams around a single vision. It helps that it expresses the vision in terms of the customer, because this point of view cuts across the organization. It cuts across roles, departments, channels, and so on. It allows the organization to step outside itself and consider how the various pieces of the system fit together.

Leisa Reichelt, head of service design and user research at the Australian government’s Digital Transformation Office, told us that in her work in government, it’s common for this kind of user-centered planning to cut across many parts of the government. “When you have three levels of government [federal, state, and local] and many agencies, you can see programs that easily cut across twenty, thirty, or forty departments,” she told us. The resulting coordination problem may seem overwhelming, but the potential to create valuable services makes this point of view important. It’s the difference, she told us, between “giving someone a concession [health care discount] card when they turn sixty-five, versus helping someone transition to a new phase of life.”

Planning Your Portfolio

The next level above program management is portfolio management. Large organizations must find ways to think about their investment across their entire product portfolio. How does the sense and respond approach apply in this context?

Let’s start with an uncontroversial observation: there is no one-size-fits-all method for product success. Every project is different, every team is different, and an approach that works for one project is probably not appropriate for the next project. That said, there are patterns we can use to identify which projects benefit from which approaches, and we can use those patterns to identify the right approach.

Managing Uncertainty and the Product Life Cycle

As we’ve said, the major reason to adopt a sense and respond approach is to deal with uncertainty. When you think about the product portfolio, uncertainties tend to fall into three major categories:

  • Customer-related. Is there a need? Does our solution satisfy the need?
  • Sustainability-related. Is the business sustainable? Is the market large enough? Can we create technology and infrastructure and operations that allow us to deliver the service profitably?
  • Growth-related. How can we grow the business?

Furthermore, these categories map more or less directly to the life cycle of a business. Some companies use this mapping explicitly. For example, at Intuit, a Silicon Valley-based maker of financial software and services, teams overlay these questions onto McKinsey & Company’s well-known three horizon model and use this model for portfolio management.

The three horizon model is a framework for managing growth. The model, formulated by Mehrdad Baghai, Stephen Coley, and David White in The Alchemy of Growth, asserts that growth should be understood in terms of three time horizons: near term, midterm, and long term.11 Horizon 1 is the near-term growth opportunity. Your core business lives here, and you should be managing it for growth and efficiency. Horizon 2 is where your emerging businesses live. These businesses must be nurtured. Some of them will become your core businesses in the midterm future. Finally, Horizon 3 is the long-term future. This is where new options are identified, evaluated, and either killed or promoted. The authors note that at any given time, your portfolio must contain projects in all three horizons.

At Intuit, teams map the three types of uncertainties onto the three horizon model.

  • For horizon 3, they ask customer-related questions. Is there a need? Can we create a solution that satisfies the need? Will people buy our solution, and, most important, do people love it?
  • For horizon 2, ideas that make it out of horizon 3 are evaluated in terms of their business viability: Can we make enough money on this to make it a worthwhile business? Can we make it efficient and repeatable? Can we find evidence that we can grow it large enough to make it worth our investment?
  • For horizon 1—businesses that have customer love and are viable—the focus is on growth and efficiency. What can we do to make this business larger and more profitable?

Companies that are disciplined about product portfolio management recognize the value of balancing the portfolio in terms of risk, strategic fit, and stage in the product life cycle. Intuit balances its investment across the product life cycle by allocating budget across the three horizons, with 10 percent going to horizon 3, 30 percent going to horizon 2, and 60 percent going to horizon 1.12
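
As a simple worked example of that split, the following sketch allocates a total product budget across the three horizons. Only the percentages come from the text; the total figure is ours, invented for illustration.

```python
# Worked illustration of the 10/30/60 split described above. The total
# budget figure is invented; only the percentages come from the text.

HORIZON_SPLIT = {"horizon_1": 0.60, "horizon_2": 0.30, "horizon_3": 0.10}

def allocate(total_budget: float) -> dict[str, float]:
    return {horizon: total_budget * share for horizon, share in HORIZON_SPLIT.items()}

print(allocate(50_000_000))
# {'horizon_1': 30000000.0, 'horizon_2': 15000000.0, 'horizon_3': 5000000.0}
```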

Setting Financial Targets across the Portfolio

This final point from Intuit is critical to enabling good planning across the portfolio, because it points to the importance of managing your budget toward different goals and, by inference, with different yardsticks. About this process, former Intuit vice president Hugh Molotsi wrote, “A common mistake companies make is measuring the progress of all their offerings using standard business metrics—like revenue, profit and customer acquisition—no matter what stage those offerings are in. Horizon planning helps us avoid that mistake by providing guidance on what our expectations should be from offerings at each stage in their maturity.”13

Intuit uses different targets for projects in each section of the portfolio. The current core businesses are measured much as we might expect, in terms of revenue, profitability, growth, and efficiency. For horizon 2, the emerging businesses, things are different. These businesses are trying to establish a foothold, so winning market share and demonstrating rapid growth rates matter more than profitability. Finally, for horizon 3 offerings, real financial results are set aside entirely. Those teams need to create a credible hypothesis about their business model, but they don’t need to prove it. Instead, they are asked to prove what the company calls “customer love,” meaning that they are looking for a problem and solution that the market wants.

Using Sense and Respond in Portfolio Management

All this serves as background to this question: How do we understand the sense and respond approach in the context of portfolio planning? The key is in understanding how you can apply sense and respond to each of the different types of uncertainties. Early-stage validation work, the horizon 3 work that seeks to identify customer love, is working at the largest scale of uncertainty. It is typical for almost every dimension of a business to be uncertain here: Who are the customers? What problem are they trying to solve? What solution can we create? How will we make money? This is classic startup territory.

In horizon 2, the level of resolution becomes one step finer. Presumably, you’ve found a business that works for a small set of customers. Here is where you’re looking to prove you can be a profitable business, so your experiments are firmly about getting the business model right and moving toward profitability. But businesses in this stage of development are at their most vulnerable. They have left the protection of the R&D group or innovation lab, where many horizon 3 ideas are born and incubated. As organizational theorist and Crossing the Chasm author Geoffrey Moore says, these initiatives are “adolescents.”14 But too often they’re held to the same operational metrics as the core businesses we find in horizon 1. This is a mistake, Moore argues. These businesses need to focus on finding their feet before they can deliver the profits we expect from our core businesses. The questions in this section tend to be about how you unlock growth, and the experiments should focus on the tactics you think will unlock growth. How can we get more customers for this business? What is stopping adoption? Are we serving the right use cases, or do we need to add or adjust? Are we serving the right market segment? Do we need to add adjacent segments?

Finally, in horizon 1, our core businesses, we can still use sense and respond tactics, but the resolution becomes finer still: it’s about growing engagement and delivering and receiving more value from each customer. What features will we add? What costs can we cut without impacting quality?

Here’s the bottom line: when you’re figuring out how to assign work to teams and how to measure their performance, you have to remember that there is no single set of measures by which team performance can be assessed. Rather, leaders must consider where an initiative is in the life cycle and what uncertainties teams face. Only then can planners create appropriate missions for teams.

Sense and Respond Takeaways for Managers

  • Changing the way you plan and assign work is one of the most important parts of a sense and respond approach.
  • Leaders create the conditions in which sense and respond teams operate. Teams can try to “be agile” as much as they like, but if their direction is not constructed correctly—if their freedom to act isn’t preserved, their goals are not defined correctly, and their constraints are not clearly understood—then there is little they will be able to do.
  • Uncertainty changes the way you plan. Plans must be oriented toward the results teams are attempting to achieve.
  • This new kind of planning has an impact at the team level; at the program level, where new methods of cross-team coordination must be employed; and at the portfolio level, where it is important to distinguish the types of outcomes that teams seek across the product life cycle.
  • Use the principle of mission command to direct teams. This means asking teams to achieve an outcome rather than to create a specific output.
  • Coordinate the activity of multiple teams (programs) by using outcome-based road maps.
  • Alignment around strategy becomes more important than ever and must be created by a robust communication process that combines top-down strategy, bottom-up insight, and two-way communication.
  • In portfolio planning, you can use the idea of outcomes to create targets across the portfolio that are appropriate for different stages in the product life cycle.