The Waterfall process and predictive planning

The traditional delivery model known as Waterfall was first shown diagrammatically by Dr Winston W. Royce, who captured what was happening in the industry in his paper Managing the Development of Large Software Systems (Proceedings WesCon, IEEE CS Press, 1970).

In it, he describes a gated process that moves in a linear sequence. Each step, such as requirements gathering, analysis, or design, has to be completed before handover to the next.

It was presented visually in Royce's paper in the following way:

The term Waterfall was coined because of the observation that, just like a real waterfall, once you've moved downstream, it's much harder to return upstream. This approach is also known as a gated approach because each phase has to be signed off before you can move on to the next.

He further observed in his paper that to de-risk this approach, there should be more than one pass through the process, with each iteration improving and building on what was learned in the previous one. In this way, you could deal with complexity and uncertainty.

For some reason, though, not many people in the industry got the memo. They continued to work in a gated approach but, rather than making multiple passes, expected the project to be complete in just one cycle or iteration.

To control the project, a highly detailed plan would be created, which was used to predict when the various features would be delivered. The predictive nature of this plan was based entirely on the detailed estimates that were drawn up during the planning phase.

This led to multiple points of potential failure within the process, usually with little time built into the schedule to recover. It felt almost de rigueur that at the end of the project some form of risk assessment would take place before finally deciding to launch with incomplete and inadequate features, often leaving everyone involved in the process stressed and disappointed.

The Waterfall process is a throwback to when software was built more like the way we'd engineer something physical. It's also been nicknamed faith-driven development because it doesn't deliver anything until the very end of the project. Its risk profile, therefore, looks similar to the following figure:

No wonder all those business folks were nervous. Often, their only involvement was at the beginning of the Software Development Life Cycle (SDLC), during the requirements phase, and then right at the end, during the delivery phase. Talk about a big reveal.

The key point in understanding a plan-driven approach is that scope is often nailed down at the beginning. Delivering to that scope then requires precise estimates to determine the budget and resourcing.

The estimation needed for that level of precision is complicated and time-consuming to complete. This leads to more paperwork and more debate; in fact, more of everything. As the process gets bigger, it takes on its own gravity, attracting more things to it that also need to be processed.

The result is a large chunk of work with a very detailed delivery plan. However, as already discussed, large chunks of work carry more uncertainty and more variability, calling into question the ability to give a precise estimate in the first place.

And because so much effort was put into developing the plan, an irrational attachment to it develops. Instead of deviating from the plan when new information is uncovered, the project manager tries to control the variance by minimizing or deferring it.

Over time, and depending on the size of the project, this can result in a substantial deviation from reality by the time the software is delivered, as shown in the following diagram:

This led to much disappointment for people who had been waiting many months to receive their new software. The gap in functionality would often cause some serious soul-searching about whether the software could be released in its present state or whether it would need rework first.

No one wants to waste money, so the rollout would likely go ahead, followed by a series of updates that would hopefully fix the problems. This left the people using the software facing a sometimes unworkable process, leading them to create a series of workarounds. Some of these would undoubtedly last for the lifetime of the software because they were deemed either too trivial or too difficult to fix.

Either way, a business implementing imperfect software that doesn't quite fit its process is faced with additional, often undocumented, costs as users try to work around the system.

For those of us who have tried building a large, complex project in a predictive, plan-driven way, there's little doubt that it often fails to deliver excellent outcomes for our customers. The findings of the Standish Group's annual CHAOS Report are a constant reminder, showing that we're still better at delivering small software projects than large ones, and that Waterfall or predictive approaches are more likely to result in a project being challenged or deemed a failure, regardless of size.
