Quality assurance

The Java artifact build already performs some basic quality assurance. It executes the included code-level tests, such as unit tests. A reasonable pipeline consists of several test scopes and scenarios, all with slightly different strengths and weaknesses. The included unit tests operate at code level and can be executed without any further runtime environment. They aim to verify the behavior of individual classes and components and provide fast feedback in case of test failures. We will see in the next chapter that unit tests need to run self-sufficiently and fast.
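As a minimal sketch of such a code-level test, assuming JUnit 5 and a hypothetical ShoppingCart class with add() and total() methods, a unit test runs without any container, database, or network:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class ShoppingCartTest {

        @Test
        void addingItemsIncreasesTotal() {
            // pure code-level test: no runtime environment required
            ShoppingCart cart = new ShoppingCart();
            cart.add(new Item("book", 25.0));
            cart.add(new Item("pen", 5.0));

            assertEquals(30.0, cart.total(), 0.001);
        }
    }

Because such tests only exercise plain objects, hundreds of them can run in seconds, which is what makes them suitable for the earliest pipeline step.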

Test results are usually recorded by the CI server for visibility and monitoring purposes. Making the outcome of the pipeline steps visible is an important aspect of Continuous Delivery. The CI server can track the number of passed unit tests and show trends over time.

There are build system plugins available that track the code coverage of the executed tests. The coverage shows which parts of the code base have been executed during the test runs. Generally speaking, greater code coverage is desirable. However, a high coverage percentage alone says nothing about the quality of the tests or of their assertions. The test results, together with their coverage, are just one of several quality characteristics.

Source code can already provide a lot of information about the software's quality. So-called static code analysis performs certain quality checks on the project's source code files without executing them. This analysis gathers information about code statements, class and method sizes, dependencies between classes and packages, and the complexity of methods. Static code analysis can already find potential errors in the source code, such as resources that are not properly closed.
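To illustrate the kind of finding such tools report, the following sketch (class and method names are invented for this example) shows a resource leak that static analysis typically flags, together with the idiomatic try-with-resources fix:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class ConfigReader {

        // flagged by static analysis: the reader is never closed
        String firstLineLeaky(String path) throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader(path));
            return reader.readLine();
        }

        // the fix: try-with-resources guarantees the reader is closed,
        // even if readLine() throws
        String firstLine(String path) throws IOException {
            try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
                return reader.readLine();
            }
        }
    }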

SonarQube is one of the most well-known code quality tools. It provides information about the quality of software projects by correlating the results of different analysis methods, such as static code analysis and test coverage. The merged information is used to provide helpful quality metrics for software engineers and architects. For example: which methods are complex, but at the same time not sufficiently tested? Which components and classes are the largest in size and complexity, and therefore candidates for refactoring? Which packages have cyclic dependencies and likely contain components that should be merged together? How does the test coverage evolve over time? How many code analysis warnings and errors are there, and how does this number evolve over time?
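As a hypothetical illustration of what the complexity metric points at, a deeply branched method (the Order type and the pricing rules are invented for this example) can often be refactored into smaller, separately testable methods:

    class ShippingCosts {

        // before: nested branching drives up cyclomatic complexity,
        // and every new rule adds yet another branch
        double shippingCostComplex(Order order) {
            if (order.isExpress()) {
                if (order.weight() > 10) {
                    return 25.0;
                }
                return 15.0;
            }
            if (order.weight() > 10) {
                return 10.0;
            }
            return 5.0;
        }

        // after: each pricing rule lives in a small, separately testable method
        double shippingCost(Order order) {
            return order.isExpress() ? expressCost(order.weight()) : standardCost(order.weight());
        }

        private double expressCost(double weight) {
            return weight > 10 ? 25.0 : 15.0;
        }

        private double standardCost(double weight) {
            return weight > 10 ? 10.0 : 5.0;
        }
    }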

It's advisable to follow some basic guidelines regarding static code analysis. Some metrics only provide rough insights into the software's quality. Test coverage is such an example. A project with high coverage does not necessarily imply well-tested software; the assertion statements could be impractical or insufficient. However, the trend of the test coverage does give an idea about the quality, for example, whether tests are being added for new and existing functionality and for bug fixes.
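A hypothetical example makes this concrete: both of the following tests produce the same line coverage for an assumed PriceCalculator.calculate() method, but only the second one would actually catch a broken calculation:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class PriceCalculatorTest {

        @Test
        void coverageWithoutQuality() {
            // executes every line of calculate(), so the coverage report
            // is happy, but a wrong result would still pass
            new PriceCalculator().calculate(100.0, 0.19);
        }

        @Test
        void coverageWithQuality() {
            // same coverage, but the behavior is actually verified
            assertEquals(119.0, new PriceCalculator().calculate(100.0, 0.19), 0.001);
        }
    }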

There are also metrics that should be strictly enforced. Code analysis warnings and errors are among them. Warnings and errors tell engineers about code style and quality violations. They are indicators of issues that need to be fixed.

First of all, there should be no such thing as a lingering compilation or analysis warning. Either the build passes the quality checks, a green traffic light, or the quality is not sufficient for deployment, a red traffic light; there is nothing reasonable in between. Software teams need to clarify which issues are plausible and need to be resolved, and which aren't. Warnings that indicate minor issues in the project are therefore treated as errors: if there is a good reason to resolve them, then the engineers have to do so, otherwise the build should fail. If a detected error or warning represents a false positive, it won't be resolved; instead, it has to be explicitly ignored by the process, for example by suppressing the warning or excluding the rule. In that case, the build is successful.
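How a false positive is ignored depends on the tool; most analyzers support suppression annotations or rule exclusions. In plain Java, for instance, a narrow, documented suppression keeps a zero-warning build green without hiding real issues (the LegacyService type and its rawNames() method stand in for a third-party API):

    public class LegacyAdapter {

        // false positive: the unchecked cast is forced by the third-party
        // legacy API, so the warning is suppressed explicitly and narrowly,
        // together with a justification for future readers
        @SuppressWarnings("unchecked")
        public java.util.List<String> namesFrom(LegacyService service) {
            return (java.util.List<String>) service.rawNames();
        }
    }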

Following this approach enables a zero-warning policy. Project builds and analyses that permanently contain a lot of errors and warnings, even non-critical ones, introduce certain issues. The existing warnings and errors obfuscate the quality view of the project. Engineers can't tell at first glance whether the hundreds of issues are actual issues or not. Besides that, having a lot of open issues discourages engineers from fixing newly introduced warnings at all. For example, imagine a house in terrible condition, with damaged walls and broken windows. Nobody would care whether another window gets broken. But a recently broken window in an otherwise pristine, well-kept house urges the person in charge to take action. The same is true for software quality checks. If there are hundreds of warnings already, nobody cares about the violation newly introduced by the last commit. Therefore, the number of quality violations in a project should be zero. Errors in builds or code analyses should break the pipeline build; either the project code needs to be fixed or the quality rules need to be adjusted for the issue to be resolved.

Code quality tools such as SonarQube are integrated as a build pipeline step. Since the quality analysis operates on static input only, this step can easily run in parallel with subsequent pipeline steps. If the quality gate does not accept the result, the build fails and the engineers need to resolve the issue before continuing development. This is an important aspect of integrating quality into the pipeline: the analysis should not only provide insights but also actively stop the pipeline in order to force action.
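The following sketch shows one way such a gate can be enforced, assuming SonarQube's project_status web service and Java 11's HttpClient; the server URL and project key are placeholders, authentication is omitted, and a real pipeline step would parse the JSON response properly:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class QualityGateCheck {

        public static void main(String[] args) throws Exception {
            // assumption: SonarQube exposes the project's quality gate status
            // under /api/qualitygates/project_status
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://sonar.example.com/api/qualitygates/project_status"
                            + "?projectKey=my-project"))
                    .build();

            String body = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString())
                    .body();

            // naive string check for illustration only
            if (!body.contains("\"status\":\"OK\"")) {
                System.err.println("Quality gate failed: " + body);
                System.exit(1); // non-zero exit code breaks the pipeline build
            }
        }
    }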
