© Michael Nir 2018
Michael Nir, The Pragmatist's Guide to Corporate Lean Strategy, https://doi.org/10.1007/978-1-4842-3537-9_9

9. First 12 Months

Executing, Mentoring, and Reviewing
Michael Nir
(1) Brookline, Massachusetts, USA

It has been 12 months since you started. What should your daily schedule look like? What should you emphasize?

The first year flew by. As you’ll read in section three, the first 12 months of any transformation are intense and rewarding, filled with success and failure. Table 9-1 details the expected percentage of time allocated around the yearly anniversary of the transformation.
Table 9-1. Expected Percentage of Time Allocated – First 12 Months

Activity                             Time Spent   Comments
Define vision                        10%          Revisit the vision
Identify customer personas           15%          Ongoing
Prototype MVP and program training   15%          From MVP to MMR and ongoing community of practice (CoP) coaching
Run experiments                      40%          Experiment locally, collect data quickly, and decide fast
Pivot or persevere                   20%          Continue supporting the program

Revisit Vision

A year after you started the lean agile transformation is an appropriate occasion to revisit the vision with the executive team, possibly at the yearly offsite. I often facilitate the discussion as a retrospective, using the postcard from the future that we constructed the previous year as a reference point. Figure 7-1 from Chapter 7 presented a compelling vision for my client; after a year they were ready to review their vision, update it, and adapt it. They answered questions such as
  • Why are we on this journey?

  • What have we learned as an organization?

  • What are our main lessons from the last year?

  • What could we have done differently?

  • Is our original vision still relevant? Do we need to change it? Do we need to update it?

  • What do we want to start doing to support the updated vision?

  • What would our new postcard from the future look like?

I facilitate these sessions with the leadership team as well as in town halls to get the necessary buy-in to proceed with the program.

Stakeholder management, which was a prevalent activity in the first 90 days, is still a priority, though at a lower intensity. The critical mass of support is achieved within the first six to nine months; by this time the fence sitters are convinced and join the supporting stakeholders. This is a result of the ongoing success of the program.

What do you do if you don’t have the support you need by the end of the first year?

Sometimes, the lean agile program is grassroots and locally grown, and it’s difficult to take it to the next level in the enterprise. This isn’t the intention when you roll it out, since it is always best to have support from the top and the executive buy-in needed when starting the program; however, you can’t always have it. This is normally the case when there is local sponsorship but a lack of interest across the enterprise in supporting the program. By the end of the 12 months, you have an indication of whether you can receive the necessary funding to roll out the program across a bigger part of the organization. I suggest building the case for this expansion by evangelizing the successes achieved at the local level to executives in other parts of the business. Many times people have their heads down, creating great results in one part of the business, and assume that their success is widely known. More often than not, this isn’t the case in big enterprises. The great results affect a business unit of 500 employees and could be limited to a geographical location. The employees creating these results are too busy working miracles, so the task of communicating the evolving vision is left to you, the lean agile transformation expert. I see this as an extended part of stakeholder management, not the traditional, limited approach that you’ll read about elsewhere. It is up to you to take the results and popularize them across the enterprise to find those interested in supporting you.

Anti-Pattern

Bad smell: I’ve often observed multiple transformations running in parallel at an enterprise of 15,000 employees; each has its merits, but together they don’t add up to a holistic approach.

I’ve been in these situations several times; we were able to create powerful results, both lean and agile. I was convinced that others in the bigger enterprise were aware of our success, but that wasn’t the case. When I met an executive by chance at a trade show and we had lunch together, I was surprised that she knew little of our achievements. In fact, she was looking for a similar program and didn’t know whom to turn to. I’ve since learned not to take visibility for granted in bigger enterprises when the lean agile transformation is limited to a business unit. By the time we have our first wins, we make sure we communicate them to the bigger organization. Often we’ll run into others who are interested in doing the same, and we’ll happily support them in their journey. Other times we’ll meet more opposition, and we’ll figure out how to lead through it. Most likely that other business unit has a vested interest in another approach or method and is reluctant to invest in a new one, or maybe they tried some of these ideas and they didn’t quite work for them. That’s often the case with failed lean agile efforts: trying a limited subset with little support and, when it fails, declaring the system faulty rather than the lopsided implementation. In these cases, I suggest looking beyond what people are saying to what they are doing. Looking for commonalities and finding possible areas of mutual interest is a reliable method to grow the lean agile approach outside the original area of implementation.

Tip

When you are in a position to influence approaches being driven in various parts of the enterprise, ALWAYS evangelize the similarities and identify a single consistent model, as I described several times in this book.

As I describe in section three, by the end of the first year, the program succeeds in creating the critical mass supporting the new way of work. However, where did the employees find the bandwidth to complete the initial lean pilots that occurred throughout the first 12 months? When I first introduced the model described in Chapter 7 at a client of mine, we asked ourselves who would be the people to drive the change, the new way of work. We found two answers, both presented in books.

John Kotter is a thought leader in the fields of business, leadership, and change. His 1996 best-selling book on leading change in organizations is widely referenced and still holds true in 2018, specifically the eight steps to a successful transformation. In enterprises where we didn’t have resources allocated to the program, we modeled the first 12 months of our lean agile transformation on the approach described in XLR8,1 Kotter’s follow-on work from 2014. In XLR8, Kotter argues that established legacy organizational structures do not provide the agility to respond quickly to narrow windows of opportunity. Kotter suggests that organizations augment their hierarchical top-down structure with a loose, startup-like distributed network built from individuals representing various domains in the enterprise. In essence, he suggests a dual operating system, with the hierarchical top-down legacy structure providing day-to-day management and the network providing the necessary responses to opportunities. Kotter is convinced that this network existed in most enterprises at a much earlier stage of their evolution. This belief is shared by Eric Ries and is described in The Startup Way.

Kotter’s vision inspired ours, both in application and in strategy. It provided the visionary support we needed from a prominent figure in change leadership who is not part of the lean agile movement. It was an outside-in affirmation of the business agility thinking that we implemented. It also proposed and validated a mechanism to successfully transform an organization that was at capacity in terms of resource allocation.

Another inspiration for the vision was found in The 4 Disciplines of Execution.2 The book offers an additional outside-in view on creating change in resource-constrained organizations. It describes the endless daily tasks that employees handle as the whirlwind; rather than battle it, the authors suggest accepting it as a necessary evil in maintaining the current organization. The whirlwind is the massive amount of energy that’s necessary just to keep the operation going on a day-to-day basis. At the same time, the book provides a framework to implement WIGs (wildly important goals). The 4 disciplines aren’t designed for managing your whirlwind but for executing your most critical strategy in the midst of it. We used the 4 disciplines to lead the lean agile change in several organizations that had no agile background. The two approaches are similar in spirit and are especially relevant in non-software environments.

Best Practice

The 4 Disciplines of Execution and Kotter’s XLR8 provide inspiration in organizations where agility isn’t widespread and where business rather than IT or software is prominent.

Identify Customer Personas

I haven’t discussed the broader concepts of user experience (UX) and user experience strategy in this book, although I have referred to them occasionally. While not strictly part of persona identification, various Lean UX concepts have been addressed over the 12 months. Jaime Levy3 describes how to devise innovative digital products that people want. Since many of the steps to validate hypotheses involve interaction with internal and external customers, I recommend reading her work to better understand how to define and validate the target users through provisional personas and customer discovery techniques, focus the team by running structured experiments using prototypes, and increase customer engagement by mapping desired user actions to meaningful metrics.

Organizations with low UX maturity require support in maturing their user experience strategy and implementation. Often, organizations do employ user experience experts; however, they are isolated from the main delivery effort and produce insights that are hard to incorporate into the products and features. What you’ll find in the first 12 months is that user experience as a concept has to become second nature to everyone’s thinking; otherwise you’ll end up with a microwave with too many buttons, a back-office system that is annoying to use, a college curriculum that is outdated, or two pipes of one line that are supposed to meet yet are 2 feet apart when the welder tries to connect them.

Best Practice

Make user experience common knowledge, and facilitate user-centric design workshops as part of the training curriculum.

Each pilot team will need user experience know-how early on. Make sure you’re able to provide it to enable validation of hypotheses. Whether the organization is immature or mature, at approximately six months into the program and no later than 12 months, you’ll need to unify the team-level efforts. Revisit the personas that were identified. Most likely, if the teams had enough freedom to experiment, there are multiple personas in place; without an overarching strategy, many of them will be redundant. As long as the personas are created organically within teams that then validate assumptions about them, rather than centrally driven by an innovation lab and thorough research, you are on the right path. You have to strike a balance between the separate personas that the teams identified and the need for a cohesive approach to persona creation and validation. Make sure you communicate the personas in place and cross-share the understanding behind the various personas and their goals. I found it useful to aggregate personas into an archetype and distinguish between the child personas and the parent archetype persona. Yet again, this is a balance, a tradeoff between letting each team ideate and explore and a more robust, centralized approach. I prefer to err on the side of giving more freedom to the teams. Yes, you can and probably should design and provide the following to the teams as optional resources:4
  • Unified survey reports

  • Survey templates

  • Interview scripts

  • Sample questions for stakeholders

  • Analysis report formats

  • Customer profiles

  • Participant recruitment criteria checklists

  • Recruitment screening guidelines

  • Design templates

  • Facilitator/observer note-taking sheets

Be wary, though, of overburdening the teams with them, and be open to experimentation with the resources you provide the team members. Being lean agile also means being open to adapting the process and changing it as needed.
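For teams that want to record the parent/child persona structure mentioned above explicitly, a minimal sketch follows; the persona names, fields, and goals are entirely hypothetical and only illustrate how shared goals can surface as candidates for the parent archetype.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Persona:
    name: str          # e.g. "Busy commuter Beth" (hypothetical)
    goals: List[str]   # goals this persona is trying to achieve
    team: str          # the pilot team that created and validates the persona

@dataclass
class Archetype:
    name: str
    children: List[Persona] = field(default_factory=list)

    def shared_goals(self) -> List[str]:
        """Goals that appear in every child persona: candidates for the parent archetype."""
        if not self.children:
            return []
        common = set(self.children[0].goals)
        for persona in self.children[1:]:
            common &= set(persona.goals)
        return sorted(common)

# Hypothetical example: two teams created similar personas independently.
parent = Archetype("Self-service plan shopper", [
    Persona("Busy commuter Beth", ["choose a plan quickly", "compare prices"], "Team A"),
    Persona("First-time buyer Raj", ["choose a plan quickly", "understand coverage"], "Team B"),
])
print(parent.shared_goals())   # ['choose a plan quickly']
```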

Best Practice

Business agility means being able to question the process itself rather than blindly following the prescriptive processes and procedures that were created by our predecessors.

Prototype MVP

There is often confusion when discussing the term minimum viable product (MVP). When the teams start out, they often refer to the MVP in its true sense: the MVP constitutes a subset of a new feature or product which the team believes they can validate with the customer, internal or external. It is used to experiment cheaply and answer a hypothesis. The goal is validated learning. The team uses it to figure out what the customer is interested in. The MVP might not be actualized using code or software. Through a series of MVP experiments the team might build a minimal marketable feature (MMF), which is the smallest set of features or subset of a product that is delivered. It has value to both the organization and the customer, internal or external. The series of MVP experiments might never culminate in an MMF, as the team might pivot completely away from the idea rather than persevere.

Following several releases of MMFs, or in other words, as the product is incrementally deployed, a major deployment might be considered a release, which is often termed a minimal marketable release (MMR). The goal of the team is to identify the smallest set of incrementally delivered features between releases that have value to the customer. Sometimes we use the term minimal marketable product (MMP) to distinguish between an MMR that is targeted at a broader audience and an MMP that is aimed at initial users, typically innovators and early adopters. The team can use the MMP as a tool to reduce time to market by offering a limited subset of features.

Tip

Make sure people distinguish between an MVP and an MMF; never release an MVP to a broad audience.

When the teams form and work on the 90-day demo, they are invested in validating assumptions towards an MVP. At the executive demo that follows the 90 days, the presentation exhibits either failed experiments of MVPs or successful experiments that support an MVP; both provide validated learning! During the demonstration, the executive team discusses the merits of the successfully validated MVPs. They decide together with the team what the next steps are. Often the team proceeds to validate the next assumption towards another MVP, and incrementally creates the MMF. Since the transition is incremental, it is tricky to define exactly when an MVP becomes an MMF and which MMF is actually an MMR, which is why the terms are often used interchangeably. However, confusing the names is detrimental since it results in overly large MVPs, which lead to upfront development efforts that are too big and wasteful.

Anti-Pattern

Bad smell: By calling everything an MVP, organizations are missing the point. They are investing too much upfront effort in validating big chunks rather than focusing on small minimal assumptions and building the product incrementally.

Expect the teams in training sessions to fall into this trap. They think of a big initiative and then quickly jump to the solution. They fall in love with the solution rather than the problem. Thinking of the solution leads them to identify a big MVP that requires a major effort, and they then immediately claim that the project is irrelevant since they will never receive the necessary funding and resources to run the experiment. It is a vicious cycle. Initial MVPs are small, small enough that a team of five can validate them quickly, in no more than 30 days. Subsequent MVPs might require limited code and take longer to validate. If you start with a big MVP, you are wasting time and effort.

Best Practice

As a coach, ask team members how they can validate the MVP without any software code written and no automation provided!

One way to help the team avoid the big MVP mistake is to ask them to identify the MVP by starting with a clean slate. As mentioned in Chapter 7, use user story mapping to identify the full breadth of a process from the perspective of a customer. Following Jeff Patton’s User Story Mapping,5 the team creates the user story map and then identifies a goal for a persona, a possible MVP. The MVP is a slice used to identify a small version of the feature or product. Team members capture the goal of the slice on a sticky note or card to the left of the story map. The next step most teams follow is removing steps from the process map to identify the smallest number of tasks that allow the specific persona to reach its goal. However, the net effect is that too many process steps remain on the map. In other words, removing process steps leads to more work to be completed compared with starting from a clean slate. I ask team members to first remove all the process steps from the map, and only then add back the steps that are absolutely necessary for the MVP slice. By starting from a clean slate, a much smaller MVP emerges.
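To picture the clean-slate exercise outside the workshop room, here is a trivial sketch with made-up step names; the point is only that the slice is built up from nothing rather than whittled down from the full map.

```python
# Hypothetical story map for an online plan purchase; the step names are illustrative only.
full_story_map = [
    "browse plans", "compare variants", "create account", "enter household details",
    "upload documents", "select plan", "add optional riders", "pay", "download policy",
]

# Whittling down: anchored by loss aversion, teams remove a few steps but keep most of them.
whittled_slice = [s for s in full_story_map if s not in {"upload documents", "add optional riders"}]

# Clean slate: start empty and add back only what the persona needs to reach the goal ("own a plan").
clean_slate_slice = ["select plan", "pay"]

print(len(full_story_map), len(whittled_slice), len(clean_slate_slice))   # 9 7 2
```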

Tip

When we are tasked with removing elements, psychological loss aversion kicks in and we find it difficult to let go; when we start with a clean slate, the discussion is free from the loss aversion bias.

You will also find that those trained in project management who are familiar with the Work Breakdown Structure concept and tool find it hard to embrace MVP concepts. They treat the customer journey as a project and break down the project into elements, thus creating big chunks of work. Instead, explain that user story maps are built grassroots without knowing the full scope of the project, since the project and the scope change as the team creates MVPs and validates assumptions to support them or pivots away from them.

I ran into the above MVP challenge with a financial services organization. They created so-called MVPs that were too large for the team to validate in a short timeframe. The MVPs were broken down from a traditional marketing requirement document. Each set of requirements was defined as an MVP and was identified as a fact rather than an assumption. This is a recurring pattern. Traditional scope documents do identify assumptions; however, they never receive any validation treatment. The MVPs this organization created were just milestone deliveries in a long process of creating feature upon feature, which resulted in wasteful development and had little to do with the essence of an MVP.

Anti-Pattern

Bad smell: Most marketing and business requirement documents have assumptions listed on page 3 or 4 of the 120-page document. These assumptions are listed in the document and then treated as facts for the remainder of the project.

For every 10 teams trained, you can realistically expect the following:
  • In the first three months, half the teams will validate an MVP that will catch the attention of the leadership team and receive funds to proceed with more validation.

  • Half of those will be able to build on that MVP and create substantial impact through an MMR.

  • Other teams will regroup and continue experimenting with more ideas.

  • The lean agile engine will impact individuals across the organization, who will want to be part of the in group and test more assumptions and validate MVPs.

  • Ongoing quarterly demonstrations of potential new features and products, with either pivot or persevere decisions based on validated learning.

How do you coach and sustain these results? Following the training session, each team receives weekly or biweekly coaching to support it in validating assumptions. At some point, you’ll have to train trainers and coach coaches to extend the reach of the program and answer the incoming requests for training and coaching. The Chapter 10 case study illustrates well the repercussions of limited bandwidth.

I implemented communities of practice to increase the reach of the program. According to Cultivating Communities of Practice by Wenger, McDermott, and Snyder,6 a community of practice is a group of people who share a concern or a passion for something they do, and learn how to do it better as they interact regularly. This definition reflects the fundamentally social nature of human learning. In all cases, the key elements are
  • The domain: Members are brought together by a learning need they share (whether this shared learning need is explicit or not and whether learning is the motivation for their coming together or a by-product of it).

  • The community: Their collective learning becomes a bond among them over time (experienced in various ways and thus not a source of homogeneity).

  • The practice: Their interactions produce resources that affect their practice (whether they engage in actual practice together or separately).

The communities revolved around the various domains we required to operate the lean agile teams effectively. Most important were the lean agile coaching community and the user experience validation community. We jump-started the process and then had individuals responsible for coordinating and facilitating the meetings, recording lessons learned, and supporting the community. The communities were open to all. We encouraged aspiring coaches to attend the coaches’ community of practice as much as we encouraged others to join the communities that were of most interest to them. We reviewed practices that the communities identified and spread them across other communities to be used as best practices. We knew we were successful when the process was self-sustaining after the first 12 months of the program.

Best Practice

Communities of practice done well are a great tool to share and spread knowledge in a distributed environment.

Run Experiments with Customers to Validate Your Hypotheses

I found myself drawn to experimenting with customers in the lean agile implementations I led, so I admit I have a skewed perception as to the amount of time you’ll find yourself investing in this activity. That said, experimenting with customers to validate hypotheses is a challenging, fun, and illuminating activity that clients find intriguing and foreign to their day jobs. Traditionally this is a combined skillset activity: the experimenting is handled by user experience experts and experienced interviewers, while the data analysis is the domain of data experts and statisticians. Setting up A/B tests, defining population and sample size, and inferring the results are complex skills. However, if you truly want a lean agile assumption validation mindset that guides decision making, you must popularize these skills. As a coach I spent many hours mentoring and coaching teams and individuals to break two recurring mental blocks: the first, that they were allowed to interact with customers, and the second, that basic statistics are not complex; to date I am not sure which block was harder to break.
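The statistics involved are less forbidding than they sound. As a rough illustration (my own sketch, not a formula from this book), here is the classic sample-size calculation for estimating a proportion; notice that the margin of error, not the size of the customer population, drives how many responses you need.

```python
import math

def sample_size_for_proportion(margin_of_error: float, z: float = 1.96, p: float = 0.5) -> int:
    """Classic sample size for estimating a proportion: n = z^2 * p * (1 - p) / e^2."""
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

# 95% confidence, +/-5% margin of error, worst-case p = 0.5:
print(sample_size_for_proportion(0.05))   # 385 responses, whether the population is
                                           # 50,000 customers or 2 million
# A cruder +/-15% directional read needs far fewer:
print(sample_size_for_proportion(0.15))   # 43
```

And, as the stories below show, a directional read, which is all a pivot-or-persevere decision usually needs, often takes far fewer responses than even this formula suggests.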

Instead of explaining why experimenting with customers to validate hypotheses is a challenging, fun, and illuminating activity, I provide real-world examples below. They are what you can expect throughout the first 12 months.

Tip

Experimenting with customers to validate hypotheses requires knowledge in user experience and statistics. Coach your teams and individuals to be comfortable with these skills.

One of my corporate lean strategy engagements was at a health insurance organization. The executive training was completed, and I was spending time with the vice president of operations to discuss the first 30 days of the engagement. We were examining the five executive teams’ projects when a director of IT dropped by and asked for five minutes of her time. They invited me to stay in the room while he presented his problem. Three months prior, they had concluded their Net Promoter Score survey, sent to 2 million customers. They received back 215,300 completed surveys, and of those, 13,128 also had free text responses. They wanted to examine the free text responses to learn from their customers; however, they preferred to use software to analyze the free text rather than read each response separately, which seemed to make sense. They had an off-the-shelf software application to analyze the free text results, and they fed it a sample of the free text survey results. The software output was not valuable, or as he put it, it was garbage. It seemed that the off-the-shelf software wasn’t targeted at this type of text analysis. In order to customize the software and teach it the specific tagging relevant for the survey data, they would have to invest several months of engineering effort. The IT director explained that they had prepared a project plan for the necessary customization and would like to present it to the change review board. He added that this important project would have to supersede existing high-priority efforts that were currently in flight. He promised that once he got the approval, he would be able to provide an analysis of the free text responses in three months.

When the IT director left the room, I asked the VP to tell me more about the NPS survey and data. She gave me background information about the results and the importance of the program.

I asked her why they wanted to analyze the results. She answered, “To take action and resolve the most important recurring issues that arise from the written feedback analysis.”

I asked her for the actual data.

Luckily she had a copy of it on a spreadsheet; all 13,128 surveys that had free text responses.

I asked her how many free text survey responses she and her fellow executives would need to read to identify the most important recurring issues that arose from the written feedback analysis.

She answered, “All of them.”

I asked, “So you’re willing to spend IT resources and three months’ time to fully analyze all surveys, then convene the leadership team, and spend time reviewing the analysis, and then four months hence decide on an action plan to respond to the synthesized analysis, and maybe in a year you’ll address the information that you have right here in front of you.”

To which she said, “Well, we need to analyze them…”

She was right, of course. However, how many did she truly need to analyze?

“How about five?” I asked her. She was confused for a minute and then she responded incredulously, “What do you mean, just five?”

I asked her to pick five responses in random and review the free text field.

We learned that people think their premiums are too high and that customer service is annoying. “No surprise there,” she muttered.

I suggested we pick another survey randomly and review it. She obliged; another customer unhappy with the service.

“Let’s select a seventh survey.” She did. Another customer that thinks they are paying too much for the insurance. “No surprise there,” she said. “I don’t need the survey to know that,” she added.

Best Practice

When you know nothing about a certain experience or event, a single data point will increase your knowledge about it infinitely, from zero to one; the next data point will increase your knowledge by 100%, from one to two. The third will increase it by 50%, from two to three data points, the fourth by 33%, and so on. It’s the law of diminishing returns at work. Be careful: how much is enough, and how much are you willing to pay for an additional data point? Make your MVPs small to limit failure, experiment locally, collect data quickly, and decide fast.
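A quick sketch of the arithmetic behind this best practice, showing the marginal gain of each additional data point:

```python
# Relative knowledge gained by the n-th data point, given the n-1 points already in hand.
for n in range(1, 11):
    if n == 1:
        print("data point  1: knowledge grows from zero -- an infinite relative gain")
    else:
        print(f"data point {n:2d}: knowledge grows by {100 / (n - 1):.0f}%")
# Prints 100%, 50%, 33%, 25%, ... ; by the 50th data point the gain is only about 2%.
```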

By the 20th random survey, a pattern had emerged, and a quick Pareto analysis told the story: 80% of the responses were attributed to a single cause, premiums that were too high.

In 30 minutes, we reviewed 30 more surveys, reaching 50 surveys to validate our finding, and found the same pattern. I asked, “What would the value be of reviewing 100 more, 1,000 more, or all of them?”

Needless to say, the urgent IT project to customize the analysis software was scrapped.
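For readers who prefer to see that 30-minute exercise as a script rather than a conversation, here is a minimal sketch of the same idea: draw a small random sample and run a Pareto tally on the causes the reviewers tag. The file name and column names are hypothetical; in the story the data was simply a spreadsheet, and humans still do the reading and tagging, the script only counts.

```python
import csv
import random
from collections import Counter

def pareto_from_sample(path: str, sample_size: int = 50, seed: int = 7) -> None:
    """Randomly sample tagged free-text survey rows and print the cumulative Pareto split."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = [r for r in csv.DictReader(f) if r.get("free_text")]
    sample = random.Random(seed).sample(rows, min(sample_size, len(rows)))
    causes = Counter(r["cause"] for r in sample)      # 'cause' is tagged by the human reviewers
    total, cumulative = len(sample), 0
    for cause, count in causes.most_common():
        cumulative += count
        print(f"{cause:<25} {count:3d}   ({100 * cumulative / total:.0f}% cumulative)")

# Hypothetical export of the 13,128 free text responses.
pareto_from_sample("nps_free_text_responses.csv")
```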

Tip

You often need a smaller sample than you think you need.

The prevalent mindset in organizations is: we need to be careful since the investment is big, and the impact is big, and we can’t be wrong; we mustn’t fail, therefore we need more data to decide, and then we’ll ask for more data just to be sure.

Lean agile flips this mindset. Let’s run a small experiment where the stakes are small, the investment is small, the impact is small, and if we fail, we learn; therefore we don’t need too much data to decide.

Tip

When dealing with health and safety issues, you want very high levels of confidence before deciding whether a process, feature, or product is safe. In these cases, the sample size out of the population must be much higher than anything I mentioned in this chapter.

Allow me to repeat this important point: the challenge we have as lean agile coaches is that we are fighting ingrained beliefs about data and decision making; we’re up against decades of faulty organizational perception that more is better and more analysis will always get you better results.

As lean agile coaches, we know better. We realize that small iterative delivery with real customer feedback provides better and faster results. In other words, if it takes an organization 13 months to release a feature while its competitor is able to figure out the first MVP in 2 months and then iterate on the MVP, releasing an MMF 2 months later and continuing to iterate until they have a considerable market share in 7 months, guess who will dominate the market. If time is of the essence, quick small releases that are validated with customers are a much better approach to creating value quickly.

“Incremental product delivery will never work in our highly regulated industry,” is what I heard from the vice president of product in an insurance company. He continued by explaining that they have to file each and every new product in every single state and even the smallest change in the product has to be reapproved by the regulator.

“Thus, it makes much more sense to file for every possible product variant upfront, since it takes six months on average to receive the approval of the regulator in each state,” he added. “I can’t validate with a customer and then add a feature based on the feedback and ask to tweak the product accordingly. I must have everything up front!”

“That’s valid,” I answered.

“Then your system doesn’t apply to our products and our industry.”

“Really?”

I asked him how he offers the product that was approved by the regulator to the customer.

“Well, since it is approved, we develop the back-end functionality and give the customer plenty to choose from,” he said. “They love it. I am sure of it.”

“Well, do they?”

I offered a small experiment. “Let’s ask our customers what they think of the options.”

“Great. What kind of a coach are you?” the vice president exclaimed. “Exactly what I need now: another survey, more development work, more resources to complete it…”

I said that I would run the experiment with 10 consumers who had selected a product online within a certain timeframe, others who had spent longer, and another group that had abandoned the online product selection system halfway through the selection process.

He wasn’t all too happy about it but played along. Before we concluded our meeting, he mumbled, “I don’t know what you expect to learn from 10 consumers.”

Well, you might guess how the story unfolds. It turns out that the consumers didn’t want to be exposed to all the options that were approved by the regulator; they found scrolling through 11 pages of various options confusing, and they really wished they could select from a small number of options. This is known as the paradox of choice.7

On top of that, IT software engineers had spent an inordinate amount of time and money building the functionality in the back-end system. Wouldn’t it make more sense to figure out which of the plans get the most interest and offer those?

He wasn’t convinced. “You’re just trying to save IT spending. What you validated is of little consequence.” He kept offering the customer all 121 product variants presented in 11 web pages with 11 products per page.

You might be quietly criticizing this VP of product development. However, this is a recurring pattern. Enterprises offer too many options to too broad a demographic and resist proven best practices that explicitly instruct them otherwise.

That’s why I find validating with customers so much fun. You get to go into the trenches and break entrenched beliefs, although not every engagement is successful, as this VP of product exemplifies.

Tip

Now wait a minute. Before you start judging this organization critically, would your organization do otherwise? Would it trust statistics and avoid looking at all the data points in order to act fast on the feedback from the consumer?

Pivot or Persevere in a Build-Measure-Learn Cycle

During the first 12 months you will support teams in moving away from their original assumptions and pivoting. As mentioned, pivoting is hard. It is an admission of implicit or explicit failure. The more invested a team is in a certain product, feature, or project, the more difficult it is to pivot from it. Your role in the first 12 months is to stand guard and urge teams to call a hypothesis or an MVP invalid and move on. Initially, we used the rule of three: a team could persevere three times with their original idea before pivoting. Often the team would be unwilling to throw out an idea that they had brought up, and they might question the validation plan, the specific experiment, the sample size, and the participants. We wanted teams to feel ownership of the process and allowed them to persevere and conduct more testing; therefore we allowed three rounds. We used the rule for instructional purposes, although in hindsight it might be too lenient. We then became ruthless, and although we got pushback from teams, I think it was for the best.

Best Practice

Pivot ruthlessly. If the validation plan is intact and the hypothesis is invalidated, PIVOT.

In another case, the team was successful with the validation of the initial hypothesis but was stuck on what to do next. In other words, they were able to validate their most important and most uncertain assumption, and therefore chose to persevere. However, they were at a loss about how to persevere: what was the next assumption to turn into a hypothesis and validate?

I found this to be another recurring pattern. People have an amazing idea, they think of one BIG assumption, and they are able to construct a hypothesis and validate it without any code developed, using a simple prototype, but then they get stuck. They are convinced that they are all done and their idea is ready to be developed completely. It’s as if they crossed the bridge and have been vindicated. “Aha!” they say. “You asked us to use this process and we used it and we showed you it works, so now give us the software or IT or other resources to make it a reality.”

“Not so fast,” I reply. “You’ve completed the first step in a long journey; what is your next most important and most uncertain assumption?”

I often hear that they don’t have any. What should they do?

As a lean agile coach I find inspiration in Eric Ries’s8 four questions:
  • Does the customer know they have the problem we are trying to solve?

  • Can we build a solution for the problem?

  • Will the customer buy the solution?

  • Will they buy it from us?

These four questions are paramount; they guide the stuck teams to develop their next assumption to validate.

Tip

People will say, “If we just had that, we would conquer the market, win the competition,” and so on. That is often the most important and most uncertain assumption to validate. It turns out, though, that they lack the next assumption to validate, the next “that,” and asking them the four questions broadens their perspective so they are able to brainstorm more assumptions to validate.

Other times you’ll find that teams call an assumption true too soon. They have only validated one hypothesis that stemmed from that assumption; however, there are usually multiple hypotheses mapped to a single assumption, and every single one needs to be validated. Take, for example, my discussion with the VP of product. I said that I experimented with 10 consumers in three groups in order to disprove his assumption that consumers want many products. It was easy to disprove since I only needed a single hypothesis, something like “if I ask 10 consumers, 9 will say that they hate the current product offering.”
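To see why such a small sample can disprove a belief, consider a quick binomial calculation on the hypothetical 9-out-of-10 result above: if most consumers actually liked the broad offering, a result that lopsided would be wildly improbable.

```python
from math import comb

def prob_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for a binomial(n, p): the chance of seeing k or more dissatisfied answers."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Suppose (hypothetically) only 30% of consumers dislike the current offering.
# How likely is it that 9 or more of 10 randomly asked consumers say they hate it?
print(f"{prob_at_least(9, 10, 0.30):.5f}")   # about 0.00014 -- 9 out of 10 is strong evidence
                                             # against the belief, even from a tiny sample
```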

Tip

You will find that it is quite easy to disprove assumptions and trickier to validate them, which is actually a good thing since it limits the upfront resource investment.

However, it doesn’t tell us what the product offering should be; therefore, I might identify the following assumption:

Our customers want simple and limited selection to quickly choose a plan.

This is as vague as assumptions get, and similar to many that you’ll run across; it is unclear what “simple” is, what “limited” means, and what “quickly” implies. The term “plan” itself is vague. Even the term “our customers” is ambiguous.

This assumption can be broken down into numerous hypotheses, each requiring its own validation plan. However, keep in mind that some of the hypotheses will be more important and uncertain than others. Some hypotheses you could resolve by researching benchmarks, such as the time a customer is willing to spend selecting an insurance plan or a metric for a limited selection on the platform they are using. Naturally, you’ll have to identify the customer persona as well, if you haven’t done so already. The truly important hypothesis is missing from the statement, and that is a situation you’ll run into. Let’s ask the team to rephrase the assumption.

They came up with the following:

If we offer 6 plans to our persona (platinum, gold, and silver each with two deductibles), we will be better off than what we have today.

Okay, now we’re getting somewhere, although we could invest an effort in defining what “better off” means.

Remember, we already have plans in place anyway, so our hypothesis can be a comparison between the product offering as we have always done it and an offering that is limited to six plans. Naturally, we could conduct A/B testing in software to get accurate results, but what if we don’t want to invest in software, since the VP is not inclined to give us any resources? We create a mockup and head to the nearest mall. We can screen participants in our small experiment for the persona that we identified and show 10 participants a screenshot of the current system and 10 others the limited options. We can then ask them a few targeted questions to get their feedback.
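If you want a number to go with such a small two-group comparison, a one-sided Fisher exact test is enough, and it needs nothing beyond the standard library. The counts below are purely hypothetical, included only to show the mechanics:

```python
from math import comb

def fisher_one_sided(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    """One-sided Fisher exact test: the chance of a split at least this lopsided toward
    group A if the mockup actually made no difference."""
    total_success = success_a + success_b
    n = n_a + n_b
    return sum(
        comb(n_a, k) * comb(n_b, total_success - k) / comb(n, total_success)
        for k in range(success_a, min(n_a, total_success) + 1)
    )

# Hypothetical mall results: 9 of 10 shoppers said they would complete a purchase with the
# 6-plan mockup, versus 2 of 10 shown the current 121-option screens.
print(f"{fisher_one_sided(9, 10, 2, 10):.4f}")   # about 0.003 -- hard to dismiss as a fluke,
                                                 # even with only 20 participants
```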

As mentioned, teams buckle at the idea of heading out to the mall and validating a hypothesis as a first step towards validating the assumption. All the better: they are questioning two innate organizational beliefs, namely, “we know better than our customers, thus we can’t learn anything by interacting with our customers” and “gathering valuable data should be left to professionals, thus by asking 20 people at the mall we can’t expect to achieve anything valuable.”

Hacking away at these beliefs is at the foundation of a lean agile transformation.

Best Practice

Remember, initial validations require no software code. They can be completed by a team at a local mall.

We’ve completed a full year of the transformation. Let’s move on to witness how it works in reality.
