10

Continuous Exploration and Finding New Features

In Continuous Exploration, product management works with people both inside and outside the Agile Release Train (ART) to find features that will provide value to the customer, explore the needs and wants of the customer, verify the feasibility of new features in the current architecture, and prepare new features to be developed by the ART.

As we can see in the following illustration, Continuous Exploration, the first stage of the Continuous Delivery Pipeline, establishes the trigger for subsequent development:

Figure 10.1 – Continuous Delivery Pipeline (© Scaled Agile, All Rights Reserved)

In a nutshell, the following activities will be discussed in this chapter:

  • Hypothesizing the customer value
  • Collaboration and research
  • Discussions about architecture
  • Synthesizing the work

The work that product management does during Continuous Exploration is important for the ART in that it helps set the context for the ART to execute the shared mission or vision. Let’s look at the work product management performs to prepare for upcoming Program Increments (PIs).

Hypothesize customer value

“If I asked people what they wanted, they would have said faster horses.” This quote, commonly (and possibly incorrectly) attributed to Henry Ford, highlights a problem that product managers face when looking for new features or new products. Customers may simply not know what they want, or cannot imagine innovations that may come from different approaches or out-of-the-box thinking.

A way of working through the unknowns in product development is to take the approach highlighted in Eric Ries’s book, The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. In this book, Ries shows a way of collaborating with the customer for iterative product development. Some of those ways are captured in the learning cycle he proposes, the build-measure-learn cycle.

The build-measure-learn cycle is an iterative product development cycle where Lean Startups discover the product and feature qualities that resonate with their customers through experimentation. The following parts of the cycle are done with the participation of the customer:

  • Build: Often, what emerges in the first iteration is a Minimum Viable Product (MVP) – something that kicks off the learning process with the customer
  • Measure: Conversations with the customer or application of metrics determine what is working and what is not
  • Learn: Armed with the knowledge from previously collected metrics, the choice is made to pivot or persevere

The build-measure-learn cycle is illustrated in the following diagram:

Figure 10.2 – Build-measure-learn cycle
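As a rough illustration only, the cycle can be sketched as a loop in Python. The helper functions, the metric numbers, and the retention threshold below are illustrative assumptions, not an actual API or official guidance:

```python
# Illustrative sketch of the build-measure-learn loop.
# All helpers, numbers, and thresholds are invented for the example.

def build_mvp(idea: str) -> dict:
    # "Build": release something minimal that starts the learning process
    return {"idea": idea, "released": True}

def measure(mvp: dict) -> dict:
    # "Measure": collect metrics or customer feedback (numbers illustrative)
    return {"signups": 40, "retained": 12}

def learn(metrics: dict) -> str:
    # "Learn": decide to pivot or persevere against the baseline
    retention = metrics["retained"] / metrics["signups"]
    return "persevere" if retention > 0.25 else "pivot"

def revise(idea: str) -> str:
    # A pivot takes the product in a different direction
    return idea + " (pivoted)"

idea = "drone pizza delivery"
decision = "pivot"
while decision == "pivot":
    metrics = measure(build_mvp(idea))
    decision = learn(metrics)
    if decision == "pivot":
        idea = revise(idea)  # take a different path and loop again
```

The loop makes the key point concrete: the decision to pivot or persevere is driven by measured data, not by the original plan.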

The build-measure-learn cycle follows the same pattern as the Lean Improvement or PDCA cycle discussed in Chapter 9, Moving to the Future with Continuous Learning. Building takes place in the plan and do phases of the PDCA cycle. Measuring corresponds to the check phase. Once we’ve performed the Measure step, we learn (or adjust) by pivoting or persevering.

With the three parts of the build-measure-learn cycle identified, let’s examine the first cycle and its use of the MVP.

Building with an MVP

Creating an MVP is the initial step of the learning journey that Lean Startups take. It provides a check for these startups to determine whether they are taking the correct path. While different people have different ideas of what constitutes an MVP, including SAFe® (which we will explore later), let’s examine the definition proposed by Eric Ries in The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses.

According to Ries, an MVP doesn’t necessarily have to be an actual product or the product in its final form. The only requirement is that it conveys to the customer a clear indication of the product or service.

The following examples are cited by Ries as examples of MVPs:

  • Dropbox created a video to help customers understand the idea of cloud-based file storage and synchronization and explain the advantages it had over competitors
  • Groupon began as a WordPress blog where customers would email asking for a PDF coupon of the offering presented as a new blog post
  • Food on the Table, an Austin-based startup, launched its service of aggregating ingredients from customers’ shopping lists and recipes and fulfilling purchases at local grocery stores by working closely in person with its first customer to define what the service was and how it worked

These examples show that an MVP is constructed to answer the following questions:

  • What should our product or service look like?
  • Can our product or service resonate with customers?
  • Can we provide this product or service?

Startups find the answers to these questions after creating the MVP by talking with customers and applying innovation accounting to metrics that are important to them. Let’s take a look at the measures used in innovation accounting.

Measuring with innovation accounting

An MVP is created to prove or disprove what Ries calls a “leap of faith assumption.” When looking at the build-measure-learn cycle, that MVP goes through three milestones. These milestones help evaluate the progress of the MVP and illustrate good inspection points for learning:

  1. Establishing the baseline: The creation of the MVP is a baseline used to see whether the “leap of faith assumption” is valid. This assumption can also be thought of as a hypothesis that an experiment (the MVP) can validate.
  2. Tune the engine: Based on objective data, we should look at making adjustments that move us closer to the goal.
  3. Pivot or persevere: These adjustments, based on the data we collect, may prompt us to continue the path we are on and add further refinement, or pivot. Pivoting may lead us to take different paths for the product or service.

For the objective data, we look to avoid vanity metrics. We previously discussed the criteria Ries originally wanted for his metrics in Chapter 5, Measuring the Process and Solution. The following qualities are repeated here:

  • Actionable: Does the metric show clear cause and effect? In other words, could you replicate the results by performing the same actions?
  • Accessible: Does everyone on the value stream have access to the same data and is that data understood by all?
  • Auditable: Is there credibility to the report?

Of course, the best data points come from direct customer feedback. If you cannot understand what the metrics are telling you, you may have to resort to interviewing your customers.

With objective data collected from true metrics or customer feedback, the startup faces a decision necessary for its continuity. Do we need to move in a different direction for our MVP (pivot) or should we continue in this direction and add more enhancements (persevere)? Let’s examine the factors involved in that decision.

Learning to pivot or persevere

Based upon customer feedback or objective data, the Lean Startup has a question to answer: does the current direction of our product or service allow us to meet our goals and provide value to the customer? If it does, then we continue on the same path for that product, adding additional features. If not, we need to move in another direction, or pivot, without remorse.

A pivot may be a change to one or more aspects of the product or service. Ries cites the following pivots used to allow a product or service to provide better value:

  • Zoom-in pivot: One feature of the product or service resonates with customers far more than any other feature of the product or service. You then focus on that feature, making it the product or service.
  • Zoom-out pivot: The product or service doesn’t provide enough customer value on its own but bundled as a feature of another product or service, it may prove its worth.
  • Customer segment pivot: The product or service meets the needs of real customers, just not those of the customers it was originally intended for.
  • Customer need pivot: While working with your customer, you may discover that you can provide better value through a different product or service. An example of this is Potbelly Sandwich Shops, which started as an antique store that provided food to its customers.

With most types of pivots, the product or service is changed, often to the point that it does not resemble the original product or service, but there may be some pivots where the MVP, product, or service is abandoned.

When facing the prospect of any type of pivot, it’s important to stay objective on the decision to pivot or persevere. Many startups often fail because they don’t pivot or pivot too late.

Now that we’ve looked at how we create our MVP and validate our hypothesis with the build-measure-learn cycle, let’s look at how SAFe® has adapted Build-Measure-Learn and innovation accounting into the SAFe Lean Startup Cycle, which applies experimentation to the execution of epics.

The SAFe® Lean Startup Cycle

In SAFe, an epic is a significant product development effort that is not scoped into a specific timebox. Epics describe the long-term changes an organization may want to make to a product. ARTs use epics as experiments to guide possible product development.

Note that while an epic and a project may have similar definitions, an epic is flexible with its scope. A project starts with an established start and end date and a fixed scope where all requirements must be met through the completion of tasks that build the deliverable. An epic really forms the basis for an experiment that may or may not run to completion.

An epic is described by a Lean business case, a brief document that outlines the need for the epic, possible solution alternatives, and a proposal for an experiment. The experiment is written in the form of an epic hypothesis statement that describes the proposed value, a hypothesis of the business outcomes, measurements of the experiment through leading indicator metrics, and any Non-Functional Requirements (NFRs) that may act as constraints. The MVP is included in the Lean business case as the implementation of the experiment.

The hypothesis statement of an epic sets the tone for the experiment by outlining the proposal, its potential benefits, proposed measures, and constraints. The following example of an epic hypothesis statement details a proposed pizza drone delivery service:

Epic Description

  • FOR customers that live in urban areas…
  • WHO desire quick and convenient pizza delivery,
  • THE PizzaBot 2022…
  • IS a drone-based autonomous pizza delivery system…
  • THAT delivers pizza from the restaurant quickly and easily.
  • UNLIKE current automobile-based pizza delivery, which is the standard,
  • OUR SOLUTION reduces overhead costs by using cheaper electricity instead of gasoline.

Business Outcomes

  • Better customer experience with quicker delivery times (and hotter pizza)
  • Lower overhead costs by reducing delivery drivers and saving on fuel

Leading Indicators

  • Reduction in average delivery time
  • Reduction in overhead costs
  • Higher NPS survey scores

NFRs

Must comply with local ordinances regarding commercial aerial drone use

Table 10.1 – Example epic hypothesis statement

An MVP in the epic differs from Ries’s definition. An MVP in this sense refers to the minimum set of features that are meant to form an initial product to be used by customers. These features would be developed by the ART.

Leading indicator metrics are used to validate the hypothesis of our experiment. They serve to tell us whether we are venturing down the correct path. We want our metrics to be actionable, accessible, and auditable so that they are not vanity metrics. We also want them to be true leading indicators, metrics that are reliable indicators of the possible value at the earliest possible moment, without waiting for trends to emerge.

The SAFe Lean Startup Cycle models itself on the build-measure-learn cycle proposed by Eric Ries. In the SAFe Lean Startup Cycle, the ART works with the epic in the following manner:

  • Build: The MVP is developed and released to the customer
  • Measure: The leading indicator metrics defined in the Lean business case of the epic determine the response from the customer
  • Learn: Based on the leading indicator metrics, the decision must be made to persevere and continue developing features beyond the MVP; pivot, finish the epic as is, and create a new epic with a new hypothesis; or stop the epic so that no development happens beyond the MVP

The SAFe Lean Startup Cycle is shown in the following diagram, outlining the path of the epic and the possible paths that may occur based upon the validation of the hypothesis:

Figure 10.3 – SAFe Lean Startup Cycle (© Scaled Agile, All Rights Reserved)

The result of this activity is a backlog of epics. Each epic outlines an experiment containing a hypothesis statement of potential value and an MVP that allows us to carry out the experiment.

Using the SAFe Lean Startup cycle based on the build-measure-learn cycle, product management sets up a hypothesis of value and an MVP as an experiment to validate the hypothesis, but product management doesn’t do this alone. They collaborate with others to refine both the hypothesis and the experiment in the Build portion of the cycle. We will examine this collaboration in the next section.

Collaboration and research

Product management requires input from different people, each with a unique perspective on what the solution should fulfill. Good product managers know that they must work together with these people and discover the qualities that can form the basis of a benefit hypothesis or the features that an MVP must have.

Here, we will look at the two aspects that form the basis of the activities product management performs to elaborate the MVP:

  • Collaboration with customers and stakeholders
  • Research to elicit product qualities and NFRs

Let’s begin with the primary collaborations that product management coordinates.

Collaboration with customers and stakeholders

The best products emerge from teams. This is true from the early phases to the design, implementation, and testing stages – finally leading to release.

Product development collaborates with the following individuals to define the features of a product:

  • Customers
  • System architects or engineers
  • Business owners
  • Product owners or teams

Let’s examine the relationships created by the collaboration of product management and these groups.

Customers

The customer is the final arbiter of value. They are, after all, the ones for whom you are building your product. Their input is the most direct source of feedback on whether the product is meeting their needs.

Beyond customers who may not know what they want in a solution, product management must also watch for customers who focus only on incremental changes, as those changes may not contribute to the product’s long-term strategy.

System architects

System architects know the most about the product from an architectural standpoint. They understand the capabilities as determined by the enablers as well as the constraints, identified by NFRs.

Product management must collaborate with system architects to understand the balance between new features, long-term development using enablers, and maintenance and the paring down of technical debt. It is just as important, though, that system architects understand customer needs and concerns. For that reason, close collaboration between product management, system architects, and the customer is paramount.

Business owners

Business owners are the key stakeholders from the organization’s point of view. They need to make sure the solution developed by the ART aligns with the mission and the overarching strategy of the organization.

Product management collaborates with business owners to understand the prioritization of features that the ART may work on.

Product owners and Agile teams

The Agile teams on the ART do the work of developing, deploying, releasing, and maintaining the solution. A key person on each Agile team is the product owner, who acts as the content authority, helping the team elaborate stories and acceptance criteria and accept stories as done.

Because teams are closest to the work and actual implementation, their insights into product and user concerns should not be ignored. Good product managers will accept this feedback from the Agile teams.

We now know the different roles that product management collaborates with and receives feedback from. At this point, we should examine the forms that this feedback will take.

Research activities

Product management collaborates with the customer, business owners, and product owners, using the following types of research activities to gain insight into customer needs and how the product will enable value:

  • Primary market research with the customer
  • Gemba walks and customer visits to see the customer experience
  • Secondary market research to further delve into the customer’s mindset
  • Lean UX to establish experiments

Let’s examine each of these activities.

Primary market research

Primary market research features direct collaboration between product management and the customer. This direct collaboration may involve the following methods:

  • Focus groups
  • User surveys or questionnaires
  • Innovation games

Focus groups, surveys, and questionnaires ask direct questions about the product or service. They may inquire about possible future needs, but sometimes the customer can’t imagine beyond the short-term use of the product.

This is where innovation games come in. Innovation games are a set of activities described in Innovation Games: Creating Breakthrough Products Through Collaborative Play by Luke Hohmann. These games allow for the discovery of unspoken needs, how your customers view success, and where your products fit with the customer.

Primary market research may often be done at the organization’s premises, but other insights can be gathered elsewhere. This is where Gemba Walks and customer site visits come into play. Let’s look at them now.

Gemba walks

Genchi genbutsu means “go see and understand” in Japanese. A Gemba walk is an activity used to practice genchi genbutsu. It originated in the Toyota Production System and is a staple of Lean thinking. On a Gemba walk, people go to where the product is used to see the actual environment.

An example of a Gemba walk comes from The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses by Eric Ries. The design of the 2004 Toyota Sienna minivan was led by Yuji Yokoya. He had little experience in North America, the target market for the Sienna. So, he proposed an undertaking: a road trip of the provinces of Canada, the fifty states of the United States, and parts of Mexico in a current Sienna minivan while interviewing customers.

Yokoya discovered that North American customers took more long-distance car trips than customers in Japan. Another finding was that minivans needed to cater to the passengers that typically occupied two-thirds of the vehicle: the kids. Based on this data, Yokoya added features that had more kid appeal and would accommodate long-distance trips.

The selection of these features had a profound effect. Sales of the 2004 Sienna model were 60 percent higher than the previous year’s model.

Secondary market research

Secondary market research consists of activities that do not involve direct collaboration with the customer. These activities help build an understanding of the customer and the market.

Some of the activities that allow you to learn about a customer’s wants and needs include the following:

  • Creating a persona, a fictional representation of your customer
  • Understanding your customer’s thoughts and emotions using empathy maps
  • Examination of your customer’s journey, including sentiments, using journey maps

These artifacts can be refined and amended during meetings with the customer, when one is available.

Lean UX

When developing features, we want to use a similar PDCA learning cycle as Build-Measure-Learn. An incremental learning cycle such as this one helps us refine our epics into features.

One cycle of this kind comes from the book Lean UX: Designing Great Products with Agile Teams by Jeff Gothelf and Josh Seiden. In the book, they talk about Lean User Experience (Lean UX), a mindset and process to incrementally discover product features and validate customer value.

Scaled Agile has adapted the process model for use beyond User Interface (UI) and User Experience (UX) teams. The following diagram from Scaled Agile illustrates the process:

Figure 10.4 – Lean UX process diagram (© Scaled Agile, Inc. All rights reserved)

Let’s examine each of the steps of the process.

Constructing a benefit hypothesis

Given environmental unknowns and risks, it is impossible to know at the start of development which features will delight the customer. The first part of this incremental design cycle looks to establish a hypothesis of the intended measurable business result of the feature if the feature is developed and released. This benefit hypothesis may be related to an epic’s hypothesis if this feature is part of an epic’s MVP or is part of further development of the epic.

Collaboratively working on the design

With a benefit hypothesis in hand, it is up to members of the ART (product management, system architects, business owners, product owners, and the Agile teams) and the customer to work together and generate artifacts that work as design elements for the product.

Building the Minimum Marketable Feature

A Minimum Marketable Feature (MMF) is the minimum amount of functionality a feature contains to prove or disprove a benefit hypothesis. The ART may iteratively implement this so they can learn about their progress toward the benefit hypothesis.

Sometimes, the MMF may be a lightweight artifact with no functionality created to generate customer feedback, such as a prototype or wireframe. Other times, the MMF may be developed and released so customers can evaluate and give their feedback.

Evaluation

The MMF is released and we wait to see how the customer reacts. We can collect objective data through observation and A/B testing. We can also query the customer through surveys.
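As a hedged illustration of evaluating an MMF through A/B testing, the following Python sketch compares conversion rates between two variants; the visitor and conversion counts are invented for the example:

```python
# Illustrative A/B comparison: variant A is the current experience,
# variant B includes the MMF. All numbers are invented.

def conversion_rate(conversions: int, visitors: int) -> float:
    return conversions / visitors

a_rate = conversion_rate(conversions=120, visitors=2000)  # variant A
b_rate = conversion_rate(conversions=170, visitors=2000)  # variant B

# Relative lift of B over A
lift = (b_rate - a_rate) / a_rate
print(f"A: {a_rate:.1%}, B: {b_rate:.1%}, relative lift: {lift:.0%}")
# A real evaluation would also test statistical significance before
# declaring the benefit hypothesis proven.
```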

Based on the data, we can decide whether or not the benefit hypothesis was proven. This may allow us to continue development, refactor, or even pivot to abandon the feature.

The results of the collaboration and research activities allow us to understand our customer’s needs and to design the features to those needs. The following artifacts may be generated by this activity:

  • An understanding of the customer needs
  • Style guides
  • Logos
  • UI assets
  • Prototypes
  • Mockups or wireframes
  • Personas
  • Customer journey maps

As product management works to understand the features of the product, the system architect needs to understand the product’s architecture and which enablers are required to keep the product features flowing. Let’s examine the system architect’s role in Continuous Exploration.

Architecting the solution

As the maintainer of a product’s architecture, the system architect keeps track of the capabilities of the product and enhances them through the creation of enablers and understands the constraints of the system as identified by NFRs.

Working with others on the ART or elsewhere in the organization, the system architect will explore the following aspects of the product to ensure that NFRs are satisfied:

  • Releasability
  • Security
  • Testability
  • Operational needs

Let’s examine these aspects in further detail.

Architecting releasability

It is often desirable to release new features to the customer at the organization’s discretion. We still want a deployment to a production environment as part of the development cadence, but the actual release becomes a business decision. For this reason, we look to separate the deployment from the release.

Architecture may play a key role in allowing this separation. Separating deployment and release relies on technology such as feature flags, which the architecture must accommodate. Feature flags allow the continuous deployment of new features without disrupting current functionality when switched off; a new feature is considered released when its flag is switched on. Applications of feature flags include canary releases and dark launches. An architecture with loosely coupled components allows each component to have its own separate release schedule, so components that require different release strategies can have them.
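As a minimal sketch of this idea, the following Python shows a feature flag check gating a deployed code path. The flag store, flag names, and rollout mechanism are illustrative assumptions, not a specific product’s API:

```python
# Illustrative feature flag sketch: code is deployed, but the feature
# is only "released" for users inside the rollout percentage.
import hashlib

FLAGS = {
    "drone_delivery": {"enabled": True, "rollout_percent": 10},  # canary release
    "heating_unit":   {"enabled": False, "rollout_percent": 0},  # dark launch
}

def is_released(flag: str, user_id: str) -> bool:
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    # Hash the user id so each user lands in a stable bucket from 0 to 99
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < cfg["rollout_percent"]

# The new code path is deployed alongside the old one; the flag decides
def delivery_method(user_id: str) -> str:
    return "drone" if is_released("drone_delivery", user_id) else "car"
```

Switching a flag off is also an instant rollback mechanism, a point we return to later in the chapter.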

Ultimately, release strategies are usually related to an organization’s strategy or business objectives. Releases may need to be done to respond to the marketplace or outmaneuver a competitor. The ability to allow for a flexible release can be a competitive advantage. We will see the benefits of this foresight in Chapter 13, Releasing on Demand to Realize Value.

Designing security

Although DevSecOps places an emphasis on shifting left to test for security vulnerabilities and concerns, the DevSecOps approach begins here as architects look to incorporate security as new features are drawn up. This allows security to be included from the very beginning, as opposed to being considered an afterthought.

To ensure security concerns are included during the initial design, the following practices are performed:

  • Threat modeling: Looking at the current product’s infrastructure, architecture, applications, and proposed features to identify possible security vulnerabilities, attackers, and attack vectors.
  • Compliance management: Ensuring that the product complies with known industry-based security regulatory standards, such as HIPAA, FedRAMP, and PCI DSS. The requirements in these standards primarily deal with security and privacy.

Outputs from these practices are usually maintained and communicated to the ART as NFRs. NFRs are constraints on the work that the ART develops as features. These constraints should be part of the continuous testing suite as development proceeds.

Ensuring testability

Testing is the primary way of ensuring the correct function, quality, security, and readiness for deployment of a feature. If a solution is not testable, the ART has no idea of its progress or whether value can be captured.

We saw earlier that a loosely coupled architecture provides flexibility through the ability to release components on different schedules. This flexibility extends to designing systems that allow more testing to occur. Systems whose components have well-defined interfaces allow for more frequent system-level testing: components that are not yet ready for integration can be replaced with stub code or logic that returns valid outputs.
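A minimal sketch of this technique in Python follows; the interface and the fixed ETA value are illustrative assumptions:

```python
# Illustrative sketch: a well-defined interface lets a component that
# is not yet ready for integration be replaced with a stub that
# returns valid outputs, so system-level tests can still run.
from abc import ABC, abstractmethod

class RouteService(ABC):
    @abstractmethod
    def estimate_minutes(self, origin: str, destination: str) -> int: ...

class StubRouteService(RouteService):
    """Stands in for the real routing component during testing."""
    def estimate_minutes(self, origin: str, destination: str) -> int:
        return 12  # a fixed, valid output (illustrative)

def quote_delivery(routes: RouteService, origin: str, dest: str) -> str:
    eta = routes.estimate_minutes(origin, dest)
    return f"Your pizza arrives in about {eta} minutes"

# A system-level test runs even though the real service isn't integrated
assert quote_delivery(StubRouteService(), "shop", "home") == \
    "Your pizza arrives in about 12 minutes"
```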

Existing legacy architectures can evolve to a modern loosely coupled Application Programming Interface (API)-based architecture by adopting the Strangler pattern. The Strangler pattern, as coined by Martin Fowler, takes a legacy monolithic system and establishes a facade interface to those entities communicating with the legacy system. New code is incrementally written to replace pieces of the interface. Eventually, all the functions of the legacy system are replaced by the new code.
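The Strangler pattern can be sketched as a facade that routes each call either to the legacy system or to its incrementally written replacement. The function names and routing table below are illustrative, not Fowler’s original example:

```python
# Illustrative Strangler pattern sketch: a facade hides whether a call
# is served by the legacy monolith or by new replacement code.

def legacy_place_order(order):
    # Existing monolith entry point (illustrative)
    return {"status": "placed", "via": "legacy"}

def new_place_order(order):
    # Incrementally written replacement (illustrative)
    return {"status": "placed", "via": "new"}

# As functions are rewritten, they are added here one at a time
MIGRATED = {"place_order"}

class Facade:
    """All callers talk to the facade and never know which side serves them."""
    def place_order(self, order):
        if "place_order" in MIGRATED:
            return new_place_order(order)
        return legacy_place_order(order)
```

When every function has migrated, the legacy system can be retired without callers ever changing.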

Another design aspect to consider for testability is the ability to execute automated tests at various levels from individual code functions to user stories – and ultimately to features. Automated testing allows for tests to be executed more frequently. Frequent testing allows for greater confidence in the quality and security of the system.

Testable architecture has benefits for the Agile teams that do the implementation work. Test-driven development (TDD) looks to create the tests first, forming an initial understanding of the behavior of the system. Behavior-driven development (BDD) continues the understanding of the system’s behavior at higher levels.

Maintaining operations

An architecture’s design considerations should not change once the feature has passed all its tests. The system architect must ensure that the architecture operates easily in non-production or staging environments, as well as the final production environment.

The first aspect of this is measurability. The architecture should allow the staging and production environments to monitor its resources to determine whether any adverse performance is present in the active system. Alerts should be designed to fire when thresholds are violated. This ability to measure resource use from low levels to higher levels is commonly referred to as full-stack telemetry.

Another aspect is the ability to record all measures into logs for easy retrieval when incidents occur. Architects should understand what aspects of the architecture can yield insights captured as logs so that operations personnel can add these measurements into a logging tool that includes timestamping and search capabilities.
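A minimal sketch of such logging, using Python’s standard `logging` and `json` modules, might look like the following; the field names and logger name are illustrative assumptions:

```python
# Illustrative sketch: emit timestamped, structured (JSON) log records
# so a logging tool can index and search them.
import json
import logging
import time

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(record.created)),
            "level": record.levelname,
            "component": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("delivery.dispatch")  # illustrative component name
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("drone battery below threshold")  # searchable, timestamped record
```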

Releasability plays a role in ensuring that the architecture allows for easy operation. Feature flags, a common tool for separating deployment to production from release to the customer, can serve as a rollback mechanism by deactivating the affected feature when an incident occurs. Another rollback mechanism to consider is blue/green deployment in production environments. Reliable automation in the CI/CD pipeline, including robust automated tests, can also enable fix-forward approaches, where incident fixes are rolled into production as soon as possible.

Outputs of the architect’s work include solution intent, an idea of the minimum architecture needed to prove the benefit hypothesis of an epic, and the NFRs, which serve as constraints on all features and stories that come from the epic.

At this point, the initial thinking about the feature in terms of thoughts on the value and effects on the architecture has been performed by product management and the system architect. Others must bring their contributions into play, from further refinement and prioritization to getting the feature ready for PI planning. Let’s examine those steps now.

Synthesizing the work

Product management has gained knowledge and collected additional research on the customer’s needs that may contribute to anticipated value. The system architect has looked into ensuring that architectural support already exists or may be forthcoming in an upcoming enabler. Product management collaborates with business owners, product owners, Agile teams, and others to complete the following activities:

  • Complete the definition of the feature
  • Use BDD to outline the acceptance criteria
  • Prioritize the feature on the Program Backlog using Weighted Shortest Job First (WSJF)
  • Prepare for PI planning
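SAFe defines WSJF as the Cost of Delay divided by job size, where the Cost of Delay is the sum of user-business value, time criticality, and risk reduction or opportunity enablement, each scored relatively (often on a modified Fibonacci scale). The following Python sketch shows the calculation; the feature names and scores are invented for illustration:

```python
# Illustrative WSJF calculation. WSJF = Cost of Delay / Job Size;
# Cost of Delay = user-business value + time criticality
#                 + risk reduction or opportunity enablement.
# Feature names and scores below are invented.
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    user_business_value: int
    time_criticality: int
    risk_reduction: int
    job_size: int

    @property
    def wsjf(self) -> float:
        cost_of_delay = (self.user_business_value
                         + self.time_criticality
                         + self.risk_reduction)
        return cost_of_delay / self.job_size

backlog = [
    Feature("Drone heating unit", 8, 5, 3, 8),    # WSJF = 16/8 = 2.0
    Feature("Route optimization", 5, 8, 2, 5),    # WSJF = 15/5 = 3.0
    Feature("Rain-mode navigation", 3, 2, 8, 13), # WSJF = 13/13 = 1.0
]

# The highest WSJF score is worked first
for f in sorted(backlog, key=lambda f: f.wsjf, reverse=True):
    print(f"{f.name}: WSJF = {f.wsjf:.2f}")
```

Because the scores are relative rather than absolute, WSJF is most useful for ranking features against one another, not for estimating actual economic value.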

The goal of synthesis is to ready the ART for the upcoming PI. For this, we will create the following artifacts:

  • A clear vision of what the ART will develop
  • A roadmap that details the product’s evolution by showing when possible solutions will be delivered
  • A backlog of defined features

Let’s examine these activities done during synthesis in detail.

Completing the feature

Remember that we started with epics that were significant product efforts conducted as experiments. We started our experiment by developing the MVP. The MVP will be a set of features that validates the benefit hypothesis of the epic. Using Lean UX, we further define the MVP into MMFs that initially began as a what-if conjecture after learning about the customer’s needs and wants. Research from product management and the system architect added more detail. It’s now time to complete the refinement so the rest of the ART can take over.

A good feature will have the following three parts: beneficiaries, a benefit hypothesis, and acceptance criteria. Let’s discuss these in greater detail.

Beneficiaries

While we have been thinking about the value to the organization or the customer, we now must consider the end user of our product or service. Sometimes, the end user is not the same person who purchases the product or service. In these cases, we may need to challenge the customer's assumptions about the needs and wants of the end user.

Benefit hypothesis

We want to include the benefits to the customer or organization if we develop and release the feature. So, we can create our benefit hypothesis using the following format:

If {proposition}, then {benefit}

Our proposition is a description of the feature. The benefit is the expected value delivered.

We may want to include the metrics we believe will prove or disprove our benefit hypothesis. In that case, we may use the following format to include our measures:

We believe that {proposition} will lead to {benefit} and this will be proven when {metric}.

As an example, let’s take the epic for the aerial pizza drone that we defined earlier in the chapter. A possible feature could be the addition of a built-in heating unit to keep the pizzas warm while delivering the pizzas. The following statement could act as the benefit hypothesis for the feature:

We believe that a heating unit built into the drone will lead to increased customer satisfaction by ensuring that pizzas are not delivered cold and this will be proven when Net Promoter Score (NPS) survey results improve.

Remember here to avoid vanity metrics – measures that yield positive reactions but often don’t indicate whether the value is really realized.
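To make the template concrete, here is a minimal Python sketch (the function name is hypothetical, purely for illustration) that renders the benefit hypothesis format from its three parts:

```python
def benefit_hypothesis(proposition: str, benefit: str, metric: str) -> str:
    """Render the 'We believe that ...' benefit hypothesis template."""
    return (
        f"We believe that {proposition} will lead to {benefit} "
        f"and this will be proven when {metric}."
    )

# Example: the pizza-drone heating unit feature from the text.
print(benefit_hypothesis(
    proposition="a heating unit built into the drone",
    benefit="increased customer satisfaction",
    metric="NPS survey results improve",
))
```

Keeping the hypothesis in a fixed template like this makes it easy to check that every feature states a proposition, a benefit, and a measurable metric before it enters the Program Backlog.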

Acceptance criteria

Acceptance criteria are measures that confirm the implementation is complete and the benefit is delivered. They are a statement of the system’s behavior with the feature included.

Any number of acceptance criteria may map to a single benefit hypothesis statement to gauge whether the hypothesis is proven.

Because the acceptance criteria describe the system behavior, we will examine a way of writing the acceptance criteria using BDD techniques in our next section.

Writing acceptance criteria using BDD

In the previous section, we looked at the function of acceptance criteria. One key role they play is describing the system's behavior once the feature is included. This behavior rolls down to components in the form of user stories and, ultimately, code functions.

We specify the desired behavior in the Gherkin format. The Gherkin format uses the following structure:

GIVEN (the initial conditions)
WHEN (an input that triggers the specific scenario occurs)
THEN (the desired behavior happens)

WHEN and THEN may have one or more clauses that describe the conditions and behavior, respectively.

Extending our example of the built-in heating unit feature for our pizza drone, we may want the following statement to act as acceptance criteria:

GIVEN the heating unit is installed and warmed up
WHEN a pizza is placed in the heating unit while on a delivery run
THEN the pizza will be hot when it reaches its destination

Acceptance criteria in this format can be used as the basis for acceptance tests that can be automated. These automated acceptance tests are executed using Cucumber, a testing tool that runs BDD tests.
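Tools such as Cucumber bind each Gherkin clause to a step definition in code. As an illustrative sketch only (this is not the Cucumber API; the `HeatingUnit` class and its methods are hypothetical stand-ins), the GIVEN/WHEN/THEN structure can be mirrored directly in a plain Python test:

```python
# Hand-rolled sketch of how a BDD tool maps Gherkin clauses to code.
# The heating unit and pizza objects are hypothetical, not a real API.

class HeatingUnit:
    def __init__(self):
        self.warmed_up = False
        self.contents = None

    def warm_up(self):
        # GIVEN: the unit is installed and warmed up
        self.warmed_up = True

    def load(self, pizza):
        # WHEN: a pizza is placed in the unit during a delivery run
        self.contents = pizza

    def deliver(self):
        # THEN: the pizza should arrive hot
        if self.warmed_up and self.contents is not None:
            self.contents["temperature"] = "hot"
        return self.contents


def test_pizza_is_hot_on_arrival():
    unit = HeatingUnit()                  # GIVEN the heating unit is installed...
    unit.warm_up()                        # ...and warmed up
    unit.load({"temperature": "warm"})    # WHEN a pizza is placed in the unit
    delivered = unit.deliver()            # THEN the pizza is hot at its destination
    assert delivered["temperature"] == "hot"

test_pizza_is_hot_on_arrival()
```

In a real Cucumber setup, the Gherkin text lives in a `.feature` file and the tool matches each clause to a step definition; the point here is only that each GIVEN/WHEN/THEN clause becomes a distinct, automatable step.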

Prioritizing using WSJF

Once product management has specified the feature in terms of its beneficiaries, benefit hypothesis, and acceptance criteria, it is placed in the Program Backlog.

The Program Backlog is the list of features that the ART can work with. Because the ART is limited in terms of how many features it can handle at once, it’s important that product management prioritizes the features to make the work more focused.

A variety of criteria can be used to prioritize the features in the Program Backlog. When sequencing features, SAFe advocates applying Principle 1: Take an economic view, which we first saw in Chapter 2, Culture of Shared Responsibility.

We start our economic view by focusing on the Cost of Delay (CoD) of the feature. The CoD is how much the value will diminish if we delay the release of the feature. It may be easier to look at the CoD as being made up of the following factors that can be relatively estimated:

  • User Business Value: How much value is anticipated to be generated by a feature in comparison to the other features on the Program Backlog?
  • Time Criticality: If a feature is not implemented promptly, is there a drop in value? Do we miss an important market window?
  • Risk Reduction or Opportunity Enablement (RR/OE): Is there another important way that a feature may reduce risk or expose the organization to new markets?

Another factor SAFe focuses on is the size of the job. We want to focus on the shortest jobs first: shorter jobs can be released sooner, so their value is realized earlier than it would be if we started with larger jobs.

SAFe then combines the focus on the CoD and the size of the job into a formula called WSJF. This formula can be specified as follows:

WSJF = Cost of Delay / Job Size

If we look at the preceding formula and substitute our CoD factors, we can rewrite it as follows:

WSJF = (User Business Value + Time Criticality + RR/OE) / Job Size

Product management can collaborate with business owners, the system architect, and other stakeholders in refinement sessions to determine the WSJF value for each feature. The participants determine the relative value of the four components (User Business Value, Time Criticality, RR/OE, and Job Size), often using values in a modified Fibonacci sequence (1, 2, 3, 5, 8, 13, 20, 40, 100).

Let’s look at how product management collaborates with other parties to calculate the WSJF value for different features.

The group convenes and looks over the set of features. They decide which feature has the smallest User Business Value; that feature's User Business Value is marked with a 1. They then compare each remaining feature's User Business Value against this reference feature. If a feature is considered the same, it also receives a 1. If it is bigger, it is assigned the number from the modified Fibonacci sequence that describes how much bigger than the reference it is.

The following table illustrates the collaborative process of determining the User Business Value in progress:

| Feature Name | User Business Value | Time Criticality | RR/OE | Cost of Delay | Job Size | WSJF |
| --- | --- | --- | --- | --- | --- | --- |
| Built-in heating unit | 8 | | | | | |
| Drone defense system | 1 | | | | | |
| Repulsor-based thrusters | 5 | | | | | |

Table 10.2 – Example of WSJF collaboration – User Business Value defined

The process is repeated for Time Criticality, RR/OE, and Job Size. In each of the columns that represent User Business Value, Time Criticality, RR/OE, and Job Size, there must be at least one 1. The following table continues our example with Time Criticality, RR/OE, and Job Size also defined:

| Feature Name | User Business Value | Time Criticality | RR/OE | Cost of Delay | Job Size | WSJF |
| --- | --- | --- | --- | --- | --- | --- |
| Built-in heating unit | 8 | 8 | 1 | | 1 | |
| Drone defense system | 1 | 13 | 3 | | 2 | |
| Repulsor-based thrusters | 5 | 1 | 1 | | 5 | |

Table 10.3 – Example of WSJF continued – Time Criticality, RR/OE, and Job Size defined

The CoD is calculated by adding User Business Value, Time Criticality, and RR/OE together. WSJF is calculated by dividing the CoD by the Job Size. The following table completes the determination of WSJF for our features:

| Feature Name | User Business Value (UBV) | Time Criticality (TC) | RR/OE | Cost of Delay (CoD = UBV + TC + RR/OE) | Job Size (JS) | WSJF (CoD / JS) |
| --- | --- | --- | --- | --- | --- | --- |
| Built-in heating unit | 8 | 8 | 1 | 17 | 1 | 17 (1st) |
| Drone defense system | 1 | 13 | 3 | 17 | 2 | 8.5 (2nd) |
| Repulsor-based thrusters | 5 | 1 | 1 | 7 | 5 | 1.4 (3rd) |

Table 10.4 – Example of WSJF determination – complete
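The arithmetic behind the completed table can be sketched in a few lines of Python; the feature names and relative scores are taken from the example above:

```python
def wsjf(ubv: int, tc: int, rroe: int, job_size: int) -> float:
    """WSJF = Cost of Delay / Job Size, where CoD = UBV + TC + RR/OE."""
    cost_of_delay = ubv + tc + rroe
    return cost_of_delay / job_size

# (UBV, TC, RR/OE, Job Size) scores from the refinement session
features = {
    "Built-in heating unit":    (8, 8, 1, 1),
    "Drone defense system":     (1, 13, 3, 2),
    "Repulsor-based thrusters": (5, 1, 1, 5),
}

# Sequence the Program Backlog: highest WSJF first.
ranked = sorted(features, key=lambda name: wsjf(*features[name]), reverse=True)
for name in ranked:
    print(f"{name}: WSJF = {wsjf(*features[name])}")
```

Note that two features with the same CoD (here, 17) end up in very different positions because of their Job Size: the shorter job delivers the same value sooner, so it is sequenced first.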

Regular refinement sessions for new features in the Program Backlog allow product management to understand which features should be completed first. This will influence the program vision and the roadmap that product management communicates to the ART in PI planning.

Preparing for PI planning

With a prioritized Program Backlog, product management and the rest of the ART can understand the initial scope of work for the upcoming PI. Ideally, these activities happen at least a month before the actual PI planning event so that any surprises are found early.

Business owners will prepare a presentation that will help the ART understand the business context and speak to how the development of the selected features may line up with the overall business strategy.

The system architect will also prepare a presentation outlining the important architectural notes and changes. This may also include any important enablers that the ART is developing for release.

What remains are the outputs of synthesis. Product management works to assemble the following artifacts before PI planning:

  • A vision of what solutions the ART can deliver
  • A roadmap of features and where they fit into the product’s evolution
  • The list of prioritized features that the ART commits to developing in the upcoming PI

Product management outlines the program vision, intending to set the focus of the ART on a common mission. This vision will be communicated to the entire ART in a presentation made after the business owner’s presentation.

The selected group of features intended to be worked on in the upcoming PI is communicated to the Agile teams on the ART. The teams can choose which features they may want to contribute to and begin looking at these features closely. They may want to start breaking those features down into stories, as we will see in Chapter 11, Continuous Integration of Solution Development, and may also look for any risks or dependencies.

Summary

In this chapter, we looked at the activities that trigger the Continuous Delivery Pipeline. With Continuous Exploration, the ART examines the marketplace and the wants and needs of its customers to generate ideas for new products or new features.

The process begins with the understanding that development is really a series of incremental build-measure-learn cycles that start by creating a hypothesis of the customer value that can be achieved.

The collaborative process then begins for product management and the system architect. Product management collaborates with the customer to gain a greater understanding of the customer and marketplace. The system architect researches the hypothesis to understand the impact that the changes will have on the architecture.

When research is complete, the ART will take what it has learned and create features. They will elaborate on these features, place them into the Program Backlog, and prioritize them. A set of the highest sequenced features will be selected for the upcoming PI. Those selected are communicated to the ART to prepare for the next phase of PI planning.

After PI planning, the ART sets to work on development. This development work starts on the next phase of the Continuous Delivery Pipeline, Continuous Integration. We will explore what Continuous Integration involves in detail in our next chapter.

Questions

  1. Which is not an activity of Continuous Exploration?
    1. Hypothesizing
    2. Collaboration and research
    3. Development
    4. Synthesis
  2. Who does product management collaborate with during Continuous Exploration (select three)?
    1. Release train engineers
    2. Customers
    3. Product owners
    4. Scrum masters
    5. System architects
    6. Solution train engineers
  3. Innovation accounting is used in which of the phases of Build-Measure-Learn?
    1. Build
    2. Measure
    3. Learn
    4. All phases
  4. The Lean UX process creates a __________ to prove or disprove a benefit hypothesis.
    1. Most Valuable Product
    2. Mean Measurable Feature
    3. Minimum Viable Product
    4. Minimum Marketable Feature
  5. Which areas does the system architect focus on when evaluating architectural changes for a new Feature (select three)?
    1. Security
    2. Cost
    3. Testability
    4. Reuse
    5. Releasability

Further reading
