© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2022
M. Breyter, Agile Product and Project Management, https://doi.org/10.1007/978-1-4842-8200-7_8

8. Incremental Delivery and Continuous Improvement

Mariya Breyter, New York, NY, USA

This chapter covers delivery, reporting, and continuous improvement. It provides examples of Agile metrics for Scrum and Kanban teams and reviews software delivery and product satisfaction metrics. It discusses team empowerment, feedback loops, and Retrospective techniques. In addition, it discusses the concept of a product life cycle and how it affects incremental delivery.

In Chapter 7, we covered the topic of estimation and planning in Agile. We discussed three planning horizons (short-term, medium-term, and long-term planning) and reviewed the approach that is used to estimate effort and plan delivery. We covered story point estimation and discussed why this type of estimation is more accurate than a calendar-based one and how to convert story point estimations into predictable roadmaps. We also covered multiple concurrent levels of Agile planning (referred to as “Agile onion”) and explained how to establish predictability of delivery in the Agile environment.

In this chapter, we are going to move into execution and discuss how incremental delivery reduces risks and enables fast feedback loops from the customers. We will also cover how the feedback – internal and external – can be used to refine and enhance execution and shape delivery models. In order to encourage open feedback, we will review multiple techniques for conducting Retrospective sessions. We will cover Lean concepts of quality and quality-related outcomes. Finally, we will review incremental delivery from the product life cycle perspective. Throughout this chapter, we will reemphasize Agile culture, that is, team empowerment, accountability, and customer-centricity.

Incremental Delivery

First, we need to define incremental delivery. It is important to distinguish between incremental delivery and continuous delivery. Continuous delivery is the process when features deployed into production are released incrementally or immediately to customers based on business needs (market demand, seasonal release dates, business timelines, or other reasons). Continuous delivery should not be confused with continuous deployment, which means that any change is automatically deployed to production by delivering it to a production-like environment. Continuous delivery allows for business decisions related to making specific functionality available to the customer.

Incremental and iterative delivery supports feature-based delivery vs. a “big bang” phase-based “Waterfall” approach. Prioritized features are made available to end users once they are developed. There is a difference between incremental and iterative delivery. Jeff Patton described it very well using the Mona Lisa [1]. He explained incremental development as building software piece by piece, similar to adding bricks to a wall. For the Mona Lisa, it would mean painting the head, then the upper body, then the hands, and so on. Paint-by-numbers artists work this way.

Iterative delivery means gradually refining the whole product: each pass covers all of the functionality at an increasing level of detail, so if development takes longer than we expect, we can still release what we’ve built so far. For the Mona Lisa, it would mean a pencil draft, then a few basic shades, then the colors, then the finishing touches – each of those steps involves work on the whole painting, just at a different level of detail. For example, if we are building a website and need to provide logon functionality, we may start with a simple database-backed encrypted login and later progress to a single sign-on solution.

It is important to understand that Agile is both iterative and incremental: teams deliver slices of functionality in priority order, emphasizing key functionality and deprioritizing extra features. We then deliver this software to customers at regular intervals (called Sprints in Scrum) or continuously, as in Kanban.

Topic for a Group Discussion

What are the benefits of incremental and iterative delivery compared to a phased Waterfall delivery? How are they conceptually different? When are the scope and product features defined? Which approach allows teams to learn from customer feedback and adjust the scope of subsequent delivery? Which companies are known for that? Share any examples that you are aware of, or do the research to find up to three relevant examples.

Product Roadmap and Release Plan

In order for iterative, incremental delivery to be meaningful for the customer, the delivery team needs to be clear on what they are delivering and when they will make this functionality available to their customers. They need to provide visibility into upcoming functionality to their current and potential customers. For product companies such as Apple, launch announcements are big events awaited by millions of loyal customers. Prior to these events, there are always rumors and excitement in official media and among social media fans. As an example, rumors about the upcoming 2021 iPhones started in 2020 and received a lot of publicity [2]. Apple frequently holds “One More Thing” events in addition to its major product unveilings, a format made famous by Steve Jobs’ announcement of the iPhone at the Macworld conference.

At this conference on January 9, 2007, Apple CEO Steve Jobs introduced the new iPhone, which combined a mobile phone, a widescreen iPod with touch controls, and an Internet communications device supporting email, web browsing, maps, and search. He announced that the iPhone would start shipping in the United States in June 2007. As we can see from this example, products are usually introduced significantly before they are released. This requires delivery teams – software and hardware – to be able to predict when specific features, devices, or network configurations will be completed and become available to the end user. This information governs marketing, sales, customer support, and multiple other customer-facing events and activities, and predefines the market strategy for the product or the whole product line.

This means that IT teams and organizations need to be highly predictable about what is being delivered and when. Imagine what would have happened if, in the preceding example where Steve Jobs promised to deliver iPhones in June 2007, the phones had not passed testing and delivery had been delayed until 2008. Apple’s reputation and the trust the company has built with its customers would have been damaged, with a long and painful path to recovery. Apple’s track record of delivery predictability is stellar. That does not mean they deliver on expected timelines 100% of the time or that every one of their products is commercially successful.

Release planning is longer-term planning (usually one quarter or longer) that enables delivery teams to answer questions about when specific features or new products are going to be delivered, or released, to the customer. The questions customers ask may vary from “When will this be ready?” to “Which features or products will we get by the end of the quarter? The end of the year?” Business stakeholders will ask: “How much will it cost? How will it impact other products? Who will work on this?” In a nutshell, release planning involves product delivery, capacity planning, budget allocation, dependency management between multiple teams and, frequently, parts of a large organization (e.g., software and hardware for smartphones), prioritization, alignment, and many other processes that allow for smooth and synchronized delivery.

In different project management methodologies, release planning is done in different ways. The frequency of releases varies as well. In many Scrum organizations, new product features are released to customers every Sprint (two to four weeks). In some large engineering or pharmaceutical organizations, there are longer, monthly, and even quarterly releases. Even for Scrum organizations, many teams group the results of multiple Sprints into one release or do the opposite: release continuously as soon as a feature is completed, which is known as continuous delivery. For large organizations that practice scaled Agile frameworks, discussed further in Chapter 10, there are structured activities such as PI (program increment) planning or a Big Room Planning. Both terms refer to a cadence-based (usually quarterly) event that aligns all teams within a large program to a shared release plan.

Some of these release planning approaches are described in Figure 8-1.
Figure 8-1

Most common release planning scenarios

 Topic for a Group Discussion

Compare scenarios 1, 2, and 3 described in Figure 8-1. What type of companies and products would benefit from each of the scenarios? Complete scenario 3, which is a hypothetical example for customer-based and market-based releases, with the examples that you are aware of (or research on the Internet). Describe each of the scenarios in your own terms.

Scenario 1 shows a release plan where each deployment at the end of the Sprint delivers value to customers. This happens with startups or smaller products without major dependencies. The second scenario shows a cadence-based (in this case, quarterly) release plan, which is suitable for major endeavors, such as commercial engineering products of a significant scale. The third scenario, utilized by most consumer-driven technology product companies, is based on customer demand and market triggers. However, even in this case, there are specific patterns. For example, Apple maintains a cadence of announcing new iPhones around the second or third week of September and releasing them one or two weeks later.

 Tip

It is important to understand the difference between production deployments of software and the release of software to the customer. Similarly, product deployment or manufacturing dates (for hardware) are not the same as release dates. Multiple deployments usually lead up to a release of an IT product or a software-powered device. Ten or fifteen years ago, many organizations did not distinguish between production deployments and releases of their software to customers; however, those days are long gone. There are multiple software release models that gradually transfer user traffic from a previous version of a website, an app, a database, or a microservice to a near-identical new release – both of which are running in production. By swapping alternating production environments (blue-green deployments) or toggling specific features (feature flags), delivery teams fully decouple production deployments from customer releases.
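As a minimal, hypothetical sketch of the feature-toggle idea from this tip (the flag name and in-memory store are invented for illustration; real systems back flags with a config service or database):

```python
# Feature-flag sketch: code for a new feature can be deployed to production
# while the flag keeps it hidden until the business decides to release it.
class FeatureFlags:
    def __init__(self, flags=None):
        # Hypothetical in-memory store; a real one would be a config service.
        self._flags = dict(flags or {})

    def is_enabled(self, name: str) -> bool:
        return self._flags.get(name, False)

    def release(self, name: str) -> None:
        # "Releasing" is a configuration change, not a new deployment.
        self._flags[name] = True


def login_page(flags: FeatureFlags) -> str:
    # Both code paths are deployed; the flag decides which one customers see.
    if flags.is_enabled("single-sign-on"):
        return "SSO login"
    return "classic login"


flags = FeatureFlags()
before = login_page(flags)       # deployed but not yet released
flags.release("single-sign-on")  # customer release, no redeployment
after = login_page(flags)
```

Here the transition from `before` ("classic login") to `after` ("SSO login") happens without touching the production deployment, which is exactly the decoupling the tip describes.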

Release plans are created based on a combination of two primary parameters: capacity and scope. While the terminology differs between Agile and Waterfall, the approach is the same: the capacity of the delivery team (assuming a long-standing dedicated team or group of teams) needs to be aligned with the scope of delivery. Agile provides a mechanism for alignment and flexibility of planning, which makes release planning much more accurate. Predictability can be based on a structured planning effort where all features are discussed and dependencies and risks are identified and aligned quarterly, as in the Scaled Agile Framework (more on this in Chapter 10), or on a lightweight quarterly planning effort where high-level features are estimated using T-shirt sizes and pulled into releases based on prior velocity. For example, in each Sprint the team delivers on average one large, three medium-size, and six small features, or three large features, or two large, one medium, and five small, and so on. Whichever way of release planning an organization chooses, it is important to apply it consistently: days, story points, and T-shirt sizes are effective only if they are based on empirical data, with a historical record and an established baseline. In Waterfall, by contrast, estimations are usually provided by tech leads in days and sequenced on the calendar.
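As an aside, the three example feature mixes above turn out to be mutually consistent if we assume illustrative weights of S = 1, M = 4, and L = 9, which make each mix sum to the same Sprint capacity of 27. That weighting is purely an assumption for illustration; each team would derive its own from its historical baseline. A capacity check based on such weights might look like:

```python
# Illustrative T-shirt weights, chosen (as an assumption) so that the three
# example mixes from the text all sum to the same Sprint capacity of 27.
WEIGHTS = {"S": 1, "M": 4, "L": 9}
SPRINT_CAPACITY = 27

def sprint_load(features: dict) -> int:
    """Total weighted load of a planned feature mix, e.g. {"L": 1, "M": 3, "S": 6}."""
    return sum(WEIGHTS[size] * count for size, count in features.items())

def fits_in_sprint(features: dict) -> bool:
    """True if the mix stays within the team's historical Sprint capacity."""
    return sprint_load(features) <= SPRINT_CAPACITY

mixes = [{"L": 1, "M": 3, "S": 6}, {"L": 3}, {"L": 2, "M": 1, "S": 5}]
print([sprint_load(m) for m in mixes])  # [27, 27, 27] - each mix fills the Sprint
```

The point of the sketch is the consistency requirement from the text: whichever sizing scheme is used, it only predicts anything once it is anchored to empirical throughput.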

A lot of factors influence how each organization does release planning – some organizations focus on feature delivery, and some more traditional organizations focus on value predictability. The mandate from the leadership frequently defines the thoroughness and the overall effort invested in planning. Program increment planning in SAFe allows for higher predictability for larger organizations with less flexible products, but as soon as anything changes, the plan requires major rework. A more flexible T-shirt-based estimation approach does not take as much time but does not prevent surprises where seemingly simple features become gigantic and super complex, some features get overlooked, or sequencing is off because a major dependency has been missed. In most cases, organizations try several ways of planning for their releases until they come up with one that works best.

Once the release plan is created, product managers publish the roadmap for each of their products. The roadmap shows which features are going to be delivered and when. It is usually targeted toward internal stakeholders. However, many companies proudly display their roadmaps to the customers to be absolutely transparent and to receive immediate feedback.

We define a product roadmap as a shared source of reference for a product that outlines the vision, direction, priorities, and deliverables (functional or nonfunctional) of a product over time. Many organizations use product roadmaps as a means of alignment across the company in terms of short- and long-term goals for the product. In Agile software development, a roadmap provides the context for the team’s or program’s everyday work while pivoting based on the market and customer needs. Multiple Agile teams may share a single product roadmap if they work on the same product. A sample roadmap is provided in Figure 8-2.
Figure 8-2

Example of a product roadmap based on quarterly planning

The product roadmap is not set in stone, and it is changing based on new information, customer feedback, competing priorities, and multiple other factors. While the usual time frame for a product roadmap is one year, it may differ based on products and companies. While the roadmap seems straightforward, it is a complex strategic tool that involves thoughtful planning, analysis, research, prioritization, dependency management, alignment, risk analysis, and numerous discussions. If done well – in a collaborative and thoughtful way – the roadmap impacts alignment, buy-in, strategic prioritization, and product success overall.

The product roadmap is aligned with the release schedule; however, it serves a different purpose of explaining to the business and to the customer why a specific direction has been chosen and how the product is going to evolve over time. The product roadmap is owned by a product manager (Product Owner in Scrum); however, it is a result of alignment with the business and the broad group of stakeholders, and its goal is to work backward from customer needs to innovate on behalf of the customers.

There are multiple software tools – Aha!, Asana, Productboard, monday.com, Roadmunk, Jira Plan, and many others that provide advanced road mapping functionality [3].

 Topic for a Group Discussion

We established that while the product roadmap looks like a simple artifact, it is the result of comprehensive research, analysis, and discussions. This underlying complexity is a frequent cause of product roadmap failure. To be effective, a roadmap needs to accomplish a number of strategically important tasks, such as communicating product strategy and vision, prioritizing delivery, creating expectations on the timeline and validating them to ensure feasibility, empowering the teams to define how they will accomplish these strategic goals, creating a single point of reference for business and delivery teams, and many others. The goal of this group discussion is to address the multiple factors that may go wrong. We will start by listing a few: the roadmap was created but not communicated; the roadmap is based on unrealistic expectations and forces people to work around the clock to meet deadlines; and so forth. For each reason for failure you come up with, discuss its impact and how it could be avoided.

Once the roadmap is established, there are specific expectations set with the business and the customers related to feature and product delivery. With multiple planning horizons, we are looking at commitment near term (upcoming Sprint in Scrum or one to three months for multiteam programs) and forecast longer term, since a lot of changes and discoveries are made outside of the quarterly time frame. Self-managing teams organize and plan their delivery, and this level of empowerment comes with a similar level of accountability for delivery.

The question that comes up next is the following: How do we know that we are executing the right features at the right pace, and how are we progressing toward executing the roadmap overall? To accomplish this purpose, companies use a set of metrics.

Agile Metrics

In traditional project management, there are usually not that many metrics involved since delivery progress is driven by status updates. Every activity is shown in a specific color (the default is RAG – red, amber, green, or “traffic light colors”) based on its status. Green if on track, amber if there is a risk or delay, and red if it cannot be delivered as planned. Once the impediment is resolved and the timeline is adjusted, the activity goes back into green or amber status. This status metric is very straightforward, so you may be confused as to why it is not sufficient or what is wrong with it. The best way to answer this question is to state that all we can do with the status is to worry.

Amber or Red statuses provide an important signal; however, this signal is not actionable. Instead of preventing the issue by adjusting expectations or changing the release plan, status provides visibility into the current state but does not answer many fundamental questions. RAG status updates are known for the surprises they frequently bring when the status is green for many months in a row and then it suddenly changes to red. It is referred to as a “watermelon status.” It is all green on the surface, but once you cut deeper, it is all red inside.

In Agile, a lot of data is collected: the number of user stories delivered during a Sprint, the number of story points completed by a team (team velocity), the number of defects during a Sprint, and many other numbers. Collected for their own sake, however, these become “vanity metrics,” which do not provide any meaningful or actionable information and can, in fact, be quite damaging.

Instead, the most useful approach is to collect the metrics that are important to the company. For example, if there is an issue with meeting predefined timelines, it is a good approach to measure predictability of delivery. In Scrum, a frequent metric for that is the “commitment rate,” which is the percentage of work delivered over work committed, in story points. For example, if a team committed to delivering 100 story points worth of work in one Sprint and delivered only 80, the commitment rate equals 80%. Anything within the range of 80–120% is considered healthy (software delivery may be quite unpredictable; however, velocity keeps adjusting every Sprint to make delivery as predictable as possible). Anything outside of this range requires a thorough analysis of the reason for the deviation and potential action items to prevent it from happening in the future. Some teams continuously overcommit and underdeliver – sometimes because they fear saying “no” to increased demands. Some teams continuously overdeliver – this is also not optimal because it makes delivery hard to predict. In both cases, it makes sense to discuss the lack of accuracy in a Retrospective, identify potential root causes, and agree on an action plan to increase predictability.
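As a small illustration of the commitment-rate calculation and the 80–120% health band described above (the function names are ours, not a standard API):

```python
def commitment_rate(delivered_points: int, committed_points: int) -> float:
    """Percentage of committed story points actually delivered in a Sprint."""
    return 100.0 * delivered_points / committed_points

def is_healthy(rate: float) -> bool:
    # The 80-120% band from the text: outside it, the deviation deserves
    # root-cause analysis in the Retrospective.
    return 80.0 <= rate <= 120.0

rate = commitment_rate(delivered_points=80, committed_points=100)
print(rate, is_healthy(rate))  # 80.0 True
```

A team that scores 130% is flagged just as a team at 60% would be: both deviations make the release plan harder to trust.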

According to the Agile Manifesto, the primary measure of progress in Agile is working software. As a result, the number of user stories delivered per Sprint or epics delivered over a longer period of time is a valid metric. However, the complexity of stories differs significantly, so either this metric is used over a large number of teams or a long period of time or there is a T-shirt sizing of stories, which helps establish higher accuracy with “translation” rules between small, medium, and large stories or features.

Since the software is delivered to customers, the most important subjective metric is customer satisfaction. Comprehensive web analytics is the key to understanding customer behaviors in using the software. Companies such as Google are driven by web metrics. The following are some examples of web metrics in Google Analytics:
  • Sessions: Measures the volume of visits to the website

  • Users: Measures unique visitors to the website

  • Page views: Measures the total number of pages viewed on the website

  • Average time on page: Measures the amount of time (on average) users spend on the website

  • Bounce rate: Measures the percentage of sessions that leave the website without taking any additional action

  • Entrances: Measures the entrance points (e.g., your homepage or pricing page) through which users enter the website

  • Exit rate: Measures the rate at which visitors leave the website from specific pages [5]

Subjective customer data are also extremely important. Through customer observation, pop-up satisfaction screens, customer satisfaction surveys, and NPS (Net Promoter Score), companies collect data to align their software with customer needs.

In addition, since it is important to maintain and develop the business, product success and, specifically, P&L (profit and loss) data are very important. These data indicate commercial success (or failure) of a product.

There are also so-called vanity metrics, introduced in Chapter 4, which are frequently collected but are not directly actionable. For example, team velocity (the cumulative number of story points delivered within a Sprint) is a helpful metric for a team that allows them to plan the next Sprint but is not informative or actionable outside of the team. Moreover, velocity is a relative metric, so comparing velocity between teams is meaningless. At the same time, velocity at the team level indicates team productivity, so capturing velocity trends within one team is informative. For example, if a team delivered 100 points in Sprint 1, 110 points in Sprint 2 (given the same composition and time off among its members), and 121 points in Sprint 3, the team is consistently increasing its velocity by 10%. Conversely, if a team is losing velocity, there is a reason to perform root cause analysis and find out what is wrong and how to support them.
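The velocity-trend arithmetic from this example can be sketched as follows (illustrative numbers only):

```python
def velocity_trend(velocities: list) -> list:
    """Sprint-over-sprint percentage change in team velocity.

    Only meaningful within a single team: story points are relative,
    so comparing these numbers across teams is meaningless.
    """
    return [round(100.0 * (b - a) / a, 1)
            for a, b in zip(velocities, velocities[1:])]

print(velocity_trend([100, 110, 121]))  # [10.0, 10.0] - steady 10% growth
print(velocity_trend([100, 90, 72]))    # negative trend: investigate root causes
```

A sustained negative trend like the second one is the signal the text describes: not a reason to blame the team, but a prompt for root cause analysis and support.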

Another common parameter that companies care about is quality. There is little value in measuring defects within a Sprint since any defects are immediately corrected as part of the collaboration between developers and testers (or in the case of CI/CD and test automation, developers become aware of defects upon integrating their code and fix the defects right away). However, production defects that leak to the customer are extremely important, and the company’s reputation depends on that. As a result, measuring the number of production defects (or so-called “escaped defects”) is very important, especially if it is presented as a trend over time.

 Topic for a Group Discussion

Consider a company that struggles to deliver software on time and is having multiple customer issues related to software quality with each release. Which three to five parameters would you suggest them to track on a consistent basis?

 Tip

For each company, it is important to identify the data being collected, implement software to eliminate manual data collection, and make these reports transparent to the organization. When publishing this data (usually as a dashboard representing live data), it is important never to use it to threaten or shame teams that need support; rather, it is important to provide them with help and coaching to get back on track.

In terms of software being used, Jira provides easy Dashboarding and Reporting functionality, especially in combination with its scaled Agile counterpart, Jira Align, which helps to analyze software delivery data at scale as related to organizational roadmaps, OKRs, and financials, as well as to manage dependencies at scale. Figure 8-3 shows an example of a Velocity Chart, prebuilt for Scrum teams in Jira. As you can see, in the first Sprint the team significantly undercommitted, so they took more work into the second Sprint and delivered exactly as much as they committed to.
Figure 8-3

Sample Velocity Chart generated by Jira (Atlassian tool)

 Topic for a Group Discussion

What happened in the third Sprint to the team whose Velocity Chart is shown previously, and what could be the root causes?

A more comprehensive dashboard can be created in Jira, visualizing almost any data to help a team stay on track throughout the Sprint. Figure 8-4 shows a fragment of such a dashboard.
Figure 8-4

Fragment of a sample Jira dashboard

While velocity and commitment rate are relevant to Scrum, what do Kanban teams measure? Metrics in Kanban focus on measuring “time to value” or “time to market” as leading indicators of market value.

Lead time and cycle time mentioned in previous chapters are the two most important Kanban metrics. They show how long work items stay in the workflow until they are completed. As discussed in Chapter 7, lead time is the total amount of time a task spends from order to delivery in the system. Cycle time is the amount of time that is spent actively working on it. It is important to understand the difference between the lead time and cycle time. The lead time starts when a new task is requested and ends when it is complete. The cycle time begins when someone actually starts working on the task, which is also known as a commitment point. We use the lead time to analyze if work items wait for too long before they are taken on. On the other hand, cycle time helps us understand the amount of time needed for the actual completion of a given task, as shown in Figure 8-5.
Figure 8-5

Lead time vs. cycle time in Kanban

Some other standard Kanban metrics include queue length, number of queues, and wait times. These allow teams to measure the effectiveness of the flow and to calculate the cost of delay based on wait times and system throughput.

Standard diagrams to measure progress include the Cumulative Flow Diagram (or the CFD), cycle time control chart or average time diagram, and lead time distribution chart [6].

It is important to establish different levels of metrics collection. For example, during a Sprint, each team measures its progress toward successful delivery at the end of the Sprint. To do so, they use the burndown chart, which shows how work progresses throughout the Sprint by subtracting the story points (or user stories) completed each day from the remaining total. The ideal burndown line goes down in a straight line from top left to bottom right (flat on weekends). This indicates a healthy project and a well-functioning Scrum team: value is being delivered constantly, in a linear way. If the burndown chart is a flat line, the team is not completing any work. Figure 8-6 shows a sample burndown chart.
Figure 8-6

Sample burndown chart
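The computation behind such a chart is straightforward; here is a minimal sketch (the Sprint size and daily numbers are made up for illustration):

```python
def burndown(total_points: int, completed_per_day: list) -> list:
    """Remaining story points after each day of the Sprint: the plotted
    burndown line starts at the Sprint commitment and should reach zero."""
    remaining = [total_points]
    for done in completed_per_day:
        remaining.append(remaining[-1] - done)
    return remaining

# A 10-day Sprint with a 50-point commitment; the mid-Sprint zeros show up
# as the flat segments that signal blocked or incomplete work.
print(burndown(50, [5, 5, 0, 0, 10, 5, 5, 10, 5, 5]))
# [50, 45, 40, 40, 40, 30, 25, 20, 10, 5, 0]
```

The flat segment on days 3 and 4 is the pattern to watch for: the line only moves when work is actually completed, not when it is merely in progress.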

This data is relevant to the team level and not of interest to the whole organization. For an organization, it is important to focus on metrics that matter: customer satisfaction, the quality of the software being delivered, the speed of delivery, predictability, the speed of refactoring technical debt, and other organization-level reporting.

 Topic for a Group Discussion

Discuss which metrics are relevant for the company level or team level and which metrics are relevant for both.

There are comprehensive tools that measure the level of organizational agility, such as AgilityHealth [8]. AgilityHealth is a measurement and continuous improvement platform providing insights into Agile maturity.

 Five key questions to review:
  1. What metrics are relevant in Scrum? Kanban?
  2. What makes Agile metrics meaningful?
  3. What is the difference between lead time and cycle time? Why are those parameters important?
  4. What is the burndown chart and when is it used?
  5. What are the levels of scaling metrics in an Agile enterprise?

Continuous Improvement and Retrospectives

Once areas of improvement are identified, the goal is to implement relevant improvements. The teams are empowered in deciding how to prioritize these improvements and which techniques to use. Every Sprint, they conduct a Retrospective, where they prioritize areas of improvement, analyze root causes, and agree on a small number (usually three to five) of action items that they want to commit to as a team during the upcoming Sprint.

After implementing these improvements for a Sprint, the team starts the next Sprint Retrospective by discussing whether the action items they’ve implemented brought them the desired results; if yes, they persevere and make these decisions a part of their process, and if not, they discuss how to pivot and try other measures. This short feedback loop allows the teams to be productive and nimble in addressing productivity, quality, and other concerns. It also allows for an honest conversation between the team members since Retrospectives are internal to the team.

There are multiple techniques that can be used to enable inclusive and open discussion during a Retrospective. A standard Retrospective technique has three steps:
  1. Silent brainstorming: Participants write their comments on two topics – “What went well last Sprint?” and “What are the improvement opportunities?” – on post-its (physical or virtual) and post them on the Retrospective board. Once items are posted on the board, team members group them into categories.
  2. Prioritization: Each team member gets an equal number of votes and places their votes (dots) on the groups of items they would like to prioritize.
  3. Action items: The whole team discusses the highest-voted groups and agrees on three to five action items they’d like to implement in the upcoming Sprint to address them. Usually, each item is owned by one of the team members, who continuously updates the team on their progress.
 Topic for a Group Discussion

Do a Retrospective on the latest class assignment you’ve done as a team. What went well? What are the improvement opportunities? What would you like to do as a team before next class to address these opportunities?

 Tip

Having the same format for Retrospectives every Sprint may become boring and unproductive. Frequently, teams use a gamified approach to Retrospectives. A popular format is “Sailboat,” where a Scrum Master or a team member paints a picture of a boat and asks questions to the group: What is a tailwind that accelerates our movement and what is the anchor that holds us back? They also analyze rocks, which are impediments preventing them from success. From this discussion and the subsequent voting, they move to the same action items conversation as in the standard format described in this chapter. There are many other formats described here: www.tastycupcakes.org/

There are many other creative formats used for Retrospectives.

These formats include “Glad, Sad, Mad,” where team members start by silently brainstorming on three topics based on their observations from the prior Sprint, then describe their items to each other, group them, and vote, similar to the original format, as shown in Figure 8-7.
Figure 8-7

A sample Retrospective format

Another popular format is “Start. Stop. Continue,” where the team reflects on three topics:
  • What should we start doing?

  • What should we stop doing?

  • What should the team continue doing?

Team members add their post-its to each category, followed by a discussion.

The foundations of Retrospectives are described by Esther Derby and Diana Larsen in their book, Agile Retrospectives: Making Good Teams Great [10].

In a remote environment, Retrospectives can be just as productive, collaborative, and fun as in-person ones. Virtual Retrospective boards are available in Mural, Miro, InVision Freehand, and many other tools. Multiple practitioners provide helpful advice on making remote Retrospectives efficient and inclusive [9].

In summary, irrespective of the format, the goal of a Retrospective is for the team to align on improving its process and delivery outcomes and to commit to actionable items that achieve those improvements.

Product Life Cycle

From the product perspective, it is important to consider the product life cycle as part of the continuous improvement process. The life cycle of a product is associated with development, marketing, and investment decisions within the business. All products go through five primary stages: development, introduction, growth, maturity, and decline. These stages can repeat in an Agile environment over the course of the product’s use. However, from a macro perspective, each product goes from its beginning stages all the way through its decline and eventual retirement.

This means that we should not invest in new features or drastic functionality improvements for a product that is nearing retirement. We should, though, ensure that it continues to provide a high-quality experience to its customers.

The product life cycle management (PLM) process manages the entire life cycle of a product from inception, through high-level design and manufacture, to service and sunsetting of the manufactured products. PLM is a key process within an organization’s information technology structure: while processes continuously improve, product development and research investment should reflect the phase the product is in.

In sum, continuous improvement is a must for modern businesses. In Scrum, it takes the form of continuous process improvement via Retrospectives. Products and services are improved in a different manner, with the product life cycle taken into consideration.

To summarize, incremental and iterative delivery is a must for a modern business. It allows organizations to align delivery with business needs, focus on solving customer issues, and align deliverables within a large organization. However, it is not a “one-and-done” effort; the goal of an Agile enterprise is to continuously improve and pivot if customer or business expectations are not met. From a product and team perspective, Agile delivery is continuous learning and thoughtful improvement every step of the way.

Key Points

  1.

    Incremental and iterative delivery allows feature-based delivery vs. a “big bang” phase-based Waterfall approach. Prioritized features are made available to end users once they are developed.

     
  2.

    In order for iterative, incremental delivery to be meaningful for the customer, the delivery team needs to be clear on what they are delivering and when they will make this functionality available to their customers. They need to provide visibility into the upcoming functionality to their current and potential customers.

     
  3.

Release planning is long-term planning (usually one quarter or longer) that enables delivery teams to answer the question of when specific features or new products will be delivered, or released, to the customer. Release plans are created based on a combination of two primary parameters: capacity and scope.

     
  4.

    A product roadmap outlines the vision, direction, priorities, and feature delivery of a product over time. The roadmap defines the plan of action that aligns the organization around short- and long-term goals for the product. In addition, it contains high-level information on how these goals would be achieved. In an Agile environment, a roadmap provides primary context for everyday work while pivoting based on the market and customer needs. If multiple Agile teams work on the same product, they share a roadmap.

     
  5.

    To ensure that teams are executing the right features at the right pace and assess how they are progressing toward executing the roadmap overall, companies use a set of Agile metrics.

     
  6.

    The most useful approach to Agile metrics is to collect the metrics that are important to the company. For example, if there is an issue with meeting predefined timelines, then it is a good approach to measure the predictability of delivery.

     
  7.

    The primary measure of progress in Agile is working software. As a result, the number of user stories delivered per Sprint or epics delivered over a longer period of time is a valid metric. Other parameters include quality and customer satisfaction. Another important metric in Scrum is the “commitment rate,” which is a percentage of work delivered over work committed in story points.

     
  8.

Lead time and cycle time are the two most important Kanban metrics. They show how long work items stay in the workflow until they are completed. Lead time is the total amount of time a task spends in the system, from order to delivery. Cycle time is the amount of time spent actively working on the task.

     
  9.

    The teams are empowered in deciding how to prioritize these improvements and which techniques to use. Every Sprint, they conduct a Retrospective, where they prioritize areas of improvement, analyze root causes, and agree on a small number (usually three to five) of action items that they want to commit to as a team during the upcoming Sprint.

     
  10.

    From a product perspective, it is important to consider the product life cycle as part of the continuous improvement process. The life cycle of a product is associated with development, marketing, and investment decisions within the business. All products go through five primary stages: development, introduction, growth, maturity, and decline. These stages can repeat in an Agile environment over the course of the product’s use.

     
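Two of the metrics in the key points above reduce to simple arithmetic: the commitment rate (work delivered over work committed, in story points) and the lead-time/cycle-time distinction. The following sketch uses invented numbers and dates purely for illustration; they are not from the book.

```python
# Hypothetical sketch of computing a commitment rate and
# lead/cycle time. All numbers and dates are illustrative.
from datetime import datetime

# Commitment rate: percentage of work delivered over work committed,
# measured in story points for a Sprint.
committed_points = 40
delivered_points = 34
commitment_rate = delivered_points / committed_points * 100

# Lead time: from order (request enters the system) to delivery.
# Cycle time: from the start of active work to delivery.
created = datetime(2022, 3, 1, 9, 0)   # request enters the workflow
started = datetime(2022, 3, 3, 9, 0)   # team begins active work
finished = datetime(2022, 3, 5, 9, 0)  # delivered to the customer

lead_time = finished - created    # includes waiting time
cycle_time = finished - started   # active work only

print(f"Commitment rate: {commitment_rate:.0f}%")
print(f"Lead time: {lead_time.days} days, cycle time: {cycle_time.days} days")
```

Note that cycle time is always a subset of lead time: the difference between the two is the time a work item spends waiting in the workflow before anyone starts on it, which is often the first target for Kanban process improvements.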