Chapter 3. Make Definition of Done Explicit

I have the simplest tastes. I am always satisfied with the best.

Oscar Wilde

Best Practice:

  • Make a Definition of Done, listing all items necessary to reach your development goals.

  • Ensure that the Definition of Done is measurable.

  • This improves the quality of development because you can track the progress of the product.

Before you can manage something, you need information about status and change. Are we reaching our development goals? Well, that depends on the status and on what your goals are.

The Definition of Done (hereafter DoD) is a term specific to the Scrum approach of Agile development. It can be an organizational standard or one made by the development team. The DoD defines when a product is “done”: everything that needs to be done to produce software in a releasable (or shippable) state to end users. It does not guarantee user acceptance or success in production, but it does provide a check toward achieving nonfunctional requirements (such as performance, security, reliability, or other characteristics as defined by the ISO 25010 standard). Because user acceptance criteria are defined by the owner of the system, they are complementary to the DoD. Think of the DoD as a list of the software development activities that add value to the end product, even though those activities are not necessarily visible to the user. For example: a minimum amount of unit test coverage, adherence to coding standards, and documentation requirements.

Note

The DoD may change over time, but should only do so in between sprints and only in agreement with the party that is responsible for the product.

Motivation

This section describes the advantages of using a DoD. Defining an end state in a DoD helps you manage progress toward that end state. And because the DoD concerns nonfunctionals, it puts the focus on software quality.

With a DoD You Can Track Progress and Prove Success

In general, a DoD is useful for helping you to manage the nonfunctional aspects of software development. The reason is simple: knowing the end goal and knowing the current status allows you to see whether you are on track and what still needs to be done. This applies to different roles in the organization: developers can assess when their work is done, a project manager can assess whether software quality meets expectations, and acceptance testers can verify nonfunctionals such as performance.

A DoD Focuses on Software Quality

Because a DoD defines when a product is ready from the developer’s perspective, it is an aid in assuring the quality level of your software. When software quality is not defined, it is hard to verify whether implementations adhere to your quality standards. Yet in practice we often see a lack of software quality requirements.

Checking off the items of a DoD confirms that development is actually done. Done then means that implementation has finished: not only the coded functionality, but also nonfunctional work such as unit tests, coding standards, and documentation.

How to Apply the Best Practice

To assess both the current situation and the intended end situation, a DoD should comply with the following:

It is assessable

Preferably, its items are quantified as metrics.

It clarifies its own scope

A DoD normally applies to all development within a sprint, but it may be specific to a feature. Also, what is “done” is not always defined in the same way. If responsibilities for development, testing, and deployment are clearly separated (e.g., they belong to different departments), then “done” for the development team may mean “development work is done.” If these specialties are mixed in one team with a shared responsibility (such as in a DevOps team), an item may be considered “done” only when it is accepted, ready for release, and pushed from an acceptance version into production. In any case, the DoD should define an end state within the scope of responsibilities of the team.

In Chapter 2 we showed an example of a norm for a code complexity measure. Extending it, a specific technical requirement for the DoD may be:

In all development for this system, at least 75% of code volume has a McCabe complexity of at most 5. For newly written units in development, at least 75% of units comply.
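To illustrate how such a norm can be checked, consider the following Python sketch. It assumes a hypothetical list of units, each with its code volume in lines and its McCabe complexity, plus a flag marking newly written units; the 75% thresholds and the complexity limit of 5 mirror the norm above.

# A minimal sketch of checking the complexity norm above.
# The data format (unit name, lines of code, McCabe complexity,
# newly written or not) is a hypothetical example.

from dataclasses import dataclass

@dataclass
class Unit:
    name: str
    lines: int          # code volume of the unit
    complexity: int     # McCabe complexity of the unit
    is_new: bool        # written during this development cycle?

MAX_COMPLEXITY = 5         # "McCabe complexity of at most 5"
VOLUME_THRESHOLD = 0.75    # "at least 75% of code volume"
NEW_UNIT_THRESHOLD = 0.75  # "at least 75% of newly written units comply"

def volume_compliance(units: list[Unit]) -> float:
    """Fraction of total code volume in units within the complexity limit."""
    total = sum(u.lines for u in units)
    simple = sum(u.lines for u in units if u.complexity <= MAX_COMPLEXITY)
    return simple / total if total else 1.0

def new_unit_compliance(units: list[Unit]) -> float:
    """Fraction of newly written units within the complexity limit."""
    new = [u for u in units if u.is_new]
    ok = [u for u in new if u.complexity <= MAX_COMPLEXITY]
    return len(ok) / len(new) if new else 1.0

if __name__ == "__main__":
    units = [  # hypothetical measurements
        Unit("parse_order", 40, 3, is_new=True),
        Unit("price_calculator", 120, 8, is_new=False),
        Unit("format_invoice", 60, 4, is_new=True),
        Unit("ship_order", 30, 2, is_new=True),
    ]
    print(f"code volume within limit: {volume_compliance(units):.0%}")
    print(f"new units within limit:   {new_unit_compliance(units):.0%}")
    done = (volume_compliance(units) >= VOLUME_THRESHOLD
            and new_unit_compliance(units) >= NEW_UNIT_THRESHOLD)
    print("DoD complexity norm met" if done else "DoD complexity norm NOT met")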

A DoD typically includes many different requirements and may be as concise as the team wants it to be. Several topics tend to recur in DoD lists. Consider the following (nonexhaustive) elements of a DoD; a sketch of how some of them can be checked automatically follows the list:

Version control
  • All source code is committed and merged into the main development line (trunk) or a specified branch.

  • Commit messages include an identifier, in a specified format, that refers to the corresponding functionality or issue.

  • Commits build without errors.

Proof that code works as intended
  • No unit, integration, or regression tests fail.

  • During code maintenance, corresponding unit tests, integration tests, and regression tests (if applicable) are adjusted accordingly and their code is committed to version control.

  • Unit test coverage for a specific task is at least 80% (as a rule of thumb), defined as the percentage of lines of code exercised by unit tests.

Administration for maintenance and planning
  • Documentation requirements (dependent on team agreements): code should be largely self-documenting; comments are used sparingly and always follow a specified format (date-author-issue ID-comment). A separate document describes the overall system structure in no more than five pages.

  • No code comments are left behind that signal work to be done (i.e., ensure that TODOs are addressed and removed).

  • The user requirements (issue/ticket/requirement/user story, etc.) are closed in a traceable way (e.g., in an issue tracking system and/or on the kanban board).

Wrap-up to complete sprint
  • New acceptance test cases are created for added functionality.

  • A sprint is formally finished when a stakeholder signs off the sprint after a demonstration of functionality (“sprint review”).
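Several of these items can be verified mechanically, for example as a step in a continuous integration pipeline. The following Python sketch illustrates the idea; the commit message pattern, the TODO markers, and the way the coverage percentage is obtained are all assumptions for illustration, not a prescribed implementation.

# A sketch of an automated DoD gate, e.g., run as a CI step.
# The issue-ID pattern (PROJ-123), TODO markers, and coverage source
# are hypothetical; adapt them to your team's agreements.

import re
import subprocess
import sys
from pathlib import Path

ISSUE_ID = re.compile(r"\b[A-Z]+-\d+\b")   # assumed commit message convention
TODO_MARKERS = ("TODO", "FIXME")           # comments signaling unfinished work
MIN_COVERAGE = 80.0                        # rule of thumb from the list above

def commit_message_has_issue_id() -> bool:
    """Check that the latest commit message references an issue."""
    msg = subprocess.run(["git", "log", "-1", "--pretty=%B"],
                         capture_output=True, text=True).stdout
    return ISSUE_ID.search(msg) is not None

def files_with_todos(src_dir: str = "src") -> list[str]:
    """List source files that still contain work-to-do markers."""
    offenders = []
    for path in Path(src_dir).rglob("*.py"):
        text = path.read_text(errors="ignore")
        if any(marker in text for marker in TODO_MARKERS):
            offenders.append(str(path))
    return offenders

def coverage_ok(reported_coverage: float) -> bool:
    """Compare a coverage percentage (from your coverage tool) to the norm."""
    return reported_coverage >= MIN_COVERAGE

if __name__ == "__main__":
    failures = []
    if not commit_message_has_issue_id():
        failures.append("commit message lacks an issue identifier")
    todos = files_with_todos()
    if todos:
        failures.append(f"unresolved TODO/FIXME in: {', '.join(todos)}")
    coverage = float(sys.argv[1]) if len(sys.argv) > 1 else 0.0
    if not coverage_ok(coverage):
        failures.append(f"unit test coverage below {MIN_COVERAGE}%")
    if failures:
        print("DoD check failed:\n- " + "\n- ".join(failures))
        sys.exit(1)
    print("All automated DoD checks passed.")

A gate like this does not replace the DoD; it only automates the items that can be quantified, leaving items such as the sprint review sign-off to the team.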

Common Objections to Using Definition of Done

This section discusses objections regarding the use of a DoD. The most common objections concern the effort required to use and maintain a DoD, and the wrong incentives it may give to the team.

Objection: DoD Is Too Much Overhead

“Using a DoD is too much overhead.”

A DoD is not a bureaucratic goal: it is an agreement within the team that provides transparency and a way of managing software quality. By extension this is also an agreement with the business owner.

From practice we know that quality must be defined in advance. When quality is not defined at all, it is the first thing that suffers when deadlines need to be met! Shortcuts will be taken in the form of code hacks, workarounds, and concessions in testing.

Important

Quality must not be left to chance. Even if it is not called a “Definition of Done,” a team should agree on quality expectations.

If a team is very skilled and quality-conscious, the DoD is not a big deal: it will simply be a representation of their way of working. If a team is more junior and not yet quality-conscious, the DoD will help them develop better-quality software by focusing on what is important.

The DoD’s level of detail and its application are up to the team. A team might, for instance, favor a summary definition, or apply the DoD as a quality agreement that is assessed only at the end of each sprint. A check at the end of each sprint is a minimum requirement, though; otherwise, the team cannot guarantee quality toward the system stakeholders.

Objection: DoD Makes the Team Feel Less Responsible

“Formalizing what ‘done’ means makes the team feel less responsible for overall quality because they just adhere to the DoD without thinking about actual improvement.”

Consider that the DoD itself defines an end state of the work that you are doing right now; it implies nothing about quality improvement. If a certain (quality) characteristic appears to be lacking (e.g., performance), the DoD should be written in a way that addresses it. Typically such requirements are nonfunctional in nature.

Also, remember that the DoD is only as fixed as you want it to be (though typically it is fixed for at least the duration of a sprint). Change its content or its application together with the team if you feel that the DoD does not provide the right incentive to achieve high-quality software. A DoD should be a good representation of which quality aspects are valued by the team, and of the standards to which they hold each other’s work.

Objection: With DoD There Is No Room for Technical Maintenance

“With the DoD there is no room for technical maintenance such as refactoring or upgrading frameworks.”

In fact, there is. If “technical maintenance” here means, for example, keeping code units simple, then that can be part of the DoD. If the technical maintenance has a broader scope, like the renewal of frameworks, the team should put it on the development backlog, where it can be treated as “regular functionality.” Putting technical maintenance on the backlog is an effective practice: it forces the team to reason about why the improvements are advantageous for the system’s operation or maintainability.

Such technical maintenance is typically not foreseen by stakeholders, and therefore it may not receive priority from a user perspective (because the improvements are invisible). Discuss with the team how they prefer to deal with this. For example, one can agree that a certain percentage of time is reserved for technical maintenance (say, 10%); within the remaining 90%, user stories can be prioritized. Such a fixed effort can be agreed on with the stakeholders.

This should not relieve developers from keeping a critical eye toward low-quality code when they see it. For that matter, the “Boy Scout Rule” is a great principle: for developers it means that the right opportunity to refactor code is when modifying that code. The result is that code is left behind “cleaner” and of higher quality than when the developer found it.

Objection: Changing the DoD May Mean Extra Work

“When the DoD changes, should we revisit all earlier sprints for compliance with the updated requirements?”

The simple answer is: only when the change in the DoD is important enough to warrant this. Changes in the DoD affect only what is delivered from now on, and their effects should be considered during the next sprint planning. If the DoD becomes stricter or more specific, the team should agree with the system stakeholder on whether the change applies retroactively. If so, it should be put on the backlog, because reanalyzing code takes time. The sprint planning will then show whether applying the changed DoD to older pieces of code has enough priority.

You can assess whether your DoD standards are being met by measuring them. Typically this is done with some kind of quality dashboard that includes code quality measurements (such as unit test coverage, coding standard violations, etc.). Clearly, the more specific and quantified a DoD is, the easier it is to measure compliance. Remember to walk through the GQM process to determine what kind of metrics help you toward your goals.
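As a sketch of what such a dashboard check might look like, the following example maps DoD items to metrics with thresholds, in the spirit of GQM. The metric names, values, and norms are hypothetical; in practice the values would come from your measurement tooling.

# A minimal GQM-style compliance summary for a quality dashboard.
# Metric names, values, and thresholds below are hypothetical examples;
# real values would be collected from coverage and static analysis tools.

from dataclasses import dataclass

@dataclass
class DoDMetric:
    goal: str         # which DoD item this metric supports
    metric: str       # what is measured
    value: float      # current measurement (from tooling)
    threshold: float  # DoD norm
    higher_is_better: bool = True

    def met(self) -> bool:
        if self.higher_is_better:
            return self.value >= self.threshold
        return self.value <= self.threshold

metrics = [
    DoDMetric("Proof that code works", "unit test coverage (%)", 83.0, 80.0),
    DoDMetric("Maintainable code", "volume with complexity <= 5 (%)", 71.0, 75.0),
    DoDMetric("Coding standards", "violations per 1,000 lines", 2.5, 3.0,
              higher_is_better=False),
]

for m in metrics:
    status = "OK " if m.met() else "FAIL"
    print(f"[{status}] {m.goal}: {m.metric} = {m.value} (norm: {m.threshold})")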
