Use Metrics to Understand Quality

Thus far, we have discussed a number of methods for improving your team’s testing processes and overall application quality. We have evaluated both process improvements and testing tactics for measuring the quality of individual features or of the application as a whole. If you were to incorporate some of these tactics into your existing processes, the quality of your code would certainly improve. At the end of the day, though, these tactics largely focus on determining whether application tests pass or fail. Of course, passing or failing automated tests tell us a lot about the state of quality for a feature. However, there are other metrics we can incorporate that may tell us even more about the code being built and tested across the team.

Visual Studio provides application development teams with a number of metrics for understanding the quality of application code, as well as its volatility, complexity, and even its maintainability. These metrics are useful at various stages of the project and, when applied appropriately, can help focus or prioritize the team’s testing effort. Let’s review the specifics of these metrics.

Measuring Complexity and Maintainability of Code

As software applications grow in complexity, it becomes increasingly difficult to build maintainable and reliable code. While certain resources, such as this book, can help developers improve the quality of their code, it is fundamentally difficult to evaluate code for quality and maintainability without testing. In recent versions of Visual Studio, a code metrics feature was added to give developers an early indication of the complexity and maintainability of their code. The primary goal of these tools is to show developers where their code might need additional testing or rework. Developers can use these features by selecting Calculate Code Metrics from the Analyze menu within Visual Studio 2008. These metrics provide insights into the following areas.

  • Maintainability index. This metric is an index value between 0 and 100 that indicates how easy the code is to maintain; the higher the number, the more maintainable the code is thought to be. The calculation is based on a combination of cyclomatic complexity, lines of code, and the Halstead Volume, which is a quantitative measure of complexity derived from the operators and operands in the code. Visual Studio provides green, yellow, and red color-coded ratings to help identify trouble spots within the code.

  • Cyclomatic complexity. This is a measure of the structural complexity of the code. It is determined by calculating the number of distinct code paths through the program, counting branching constructs such as do...while and foreach loops, if blocks, and switch cases. The result is a number that represents the complexity of control flow within the code. Areas of code that exhibit high cyclomatic complexity should be refactored into smaller, more granular blocks (a short illustration follows this list); code that cannot be refactored should be thoroughly tested.

  • Depth of inheritance. This metric indicates the number of class definitions that extend from a class up to the root of the inheritance hierarchy. The deeper the hierarchy, the more difficult it can be to understand where particular methods or properties are defined or redefined.

  • Class coupling. This measures a class’s coupling to unique classes through parameters, variables, return types, method calls, interface implementations, generic or template instantiations, base classes, fields defined on external types, and attribute decoration. Good design suggests that classes and methods have low coupling and high cohesion. The opposite, high coupling and low cohesion, indicates a design that is difficult to reuse because of the number of dependencies.

  • Lines of code. This measure is an approximation of the number of lines of Intermediate Language (IL) code. Because the measurement is based on IL, it is not an exact match against the source code. By itself, this metric does not point to a specific problem; it may simply suggest that methods with a large number of lines should be refactored into smaller parts.
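
To make this advice concrete, the following sketch uses a hypothetical ShippingCalculator class whose types and rates are invented purely for illustration. The first method packs several decision points into one body and would report a relatively high cyclomatic complexity; the second version splits the same logic into smaller methods, which is exactly the kind of refactoring these metrics are meant to prompt. For reference, the maintainability index is commonly documented as MAX(0, (171 - 5.2 * ln(Halstead Volume) - 0.23 * (Cyclomatic Complexity) - 16.2 * ln(Lines of Code)) * 100 / 171).

using System;

// Hypothetical types used only for this illustration.
public enum Priority { Standard, TwoDay, Overnight }

public class Order
{
    public double Weight { get; set; }
    public bool IsInternational { get; set; }
    public Priority Priority { get; set; }
}

public static class ShippingCalculator
{
    // A branch-heavy method: each if/else branch and switch case adds a
    // path, so the reported cyclomatic complexity is relatively high.
    public static decimal CalculateCost(Order order)
    {
        if (order == null) throw new ArgumentNullException("order");

        decimal cost;
        if (order.Weight > 50)
            cost = order.IsInternational ? 200m : 80m;
        else if (order.Weight > 10)
            cost = order.IsInternational ? 90m : 25m;
        else
            cost = order.IsInternational ? 40m : 10m;

        switch (order.Priority)
        {
            case Priority.Overnight: cost *= 3m; break;
            case Priority.TwoDay: cost *= 2m; break;
            default: break;
        }
        return cost;
    }

    // The same logic split into smaller, more granular methods. Each helper
    // has only a few paths, so the per-method complexity drops and the
    // maintainability index typically improves.
    public static decimal CalculateCostRefactored(Order order)
    {
        if (order == null) throw new ArgumentNullException("order");
        return ApplyPriority(BaseRate(order), order.Priority);
    }

    private static decimal BaseRate(Order order)
    {
        decimal domestic = order.Weight > 50 ? 80m : order.Weight > 10 ? 25m : 10m;
        decimal international = order.Weight > 50 ? 200m : order.Weight > 10 ? 90m : 40m;
        return order.IsInternational ? international : domestic;
    }

    private static decimal ApplyPriority(decimal cost, Priority priority)
    {
        if (priority == Priority.Overnight) return cost * 3m;
        if (priority == Priority.TwoDay) return cost * 2m;
        return cost;
    }
}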

When teams combine the use of these metrics with automated testing, they can quickly assess the risk areas of the application code and structure or prioritize their testing accordingly. For example, if the code being developed for a specific feature has a high cyclomatic complexity, the application developer should ensure that he or she has adequate unit tests to cover the variations in the logic effectively, assuming that the logic cannot be refactored. Additionally, the feature tester should make certain that the test plans or strategy prioritizes this area of the feature. While code metrics in and of themselves are interesting, it is more important to understand what they tell us about our application code, so we can take proper actions on them. They are great to apply at the individual feature level during the coding portion of the development life cycle and perhaps even early in the testing phase since they can help steer the direction of testing. Application developers and testers should leverage these tools as a means to understand more about the application code and validate that their approach to testing the feature or code is most effective.
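
As a sketch of that approach, the tests below use the MSTest attributes that ship with Visual Studio against the hypothetical ShippingCalculator from the previous listing, with roughly one test per logic path so that each branch contributing to the cyclomatic complexity is exercised.

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ShippingCalculatorTests
{
    // Each test pins down a different combination of branches in CalculateCost.
    [TestMethod]
    public void HeavyInternationalOrder_UsesHighestBaseRate()
    {
        var order = new Order { Weight = 60, IsInternational = true, Priority = Priority.Standard };
        Assert.AreEqual(200m, ShippingCalculator.CalculateCost(order));
    }

    [TestMethod]
    public void LightDomesticOvernightOrder_TriplesBaseRate()
    {
        var order = new Order { Weight = 5, IsInternational = false, Priority = Priority.Overnight };
        Assert.AreEqual(30m, ShippingCalculator.CalculateCost(order));
    }

    [TestMethod]
    public void MidWeightDomesticTwoDayOrder_DoublesBaseRate()
    {
        var order = new Order { Weight = 20, IsInternational = false, Priority = Priority.TwoDay };
        Assert.AreEqual(50m, ShippingCalculator.CalculateCost(order));
    }
}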

Using Perspectives to Understand Quality

In the same way that code metrics are useful for understanding the complexity and maintainability of application code at a feature level, there are other metrics that can be leveraged across the entire code base to understand how application quality is trending. Because projects are very fluid, with frequent feature check-ins, daily builds, and various levels of testing happening in real time, it is often difficult for teams to understand the state of application quality. Fortunately, Team Foundation Server (TFS) provides insights into team progress and quality through data reports, which are available within the analysis features of TFS. Let’s enumerate some of the available metrics and discuss the value they provide for project teams.

  • Build perspective. This represents a set of metrics focused on application builds. It can be used to analyze various dimensions of build data, such as the number of builds over time, build outcomes, who performed each build, and when it was performed. This helps teams understand how often builds succeed, how well they are achieving their build quality goals, and whether the established build processes are effective.

  • Code churn perspective. This provides metrics about the rate of change in the code, in terms of how many files were changed and the number of lines changed, added, or deleted. This data can be analyzed in a number of ways but is especially effective when viewed in the context of specific builds. It can help teams, especially testers, identify the scope of code change between builds, which helps focus the testing effort. It is also a good indicator of code volatility within specific areas of the code, which can help identify risk areas that may require additional testing.

  • Code coverage perspective. This provides metrics about how many lines or blocks of code have been exercised across test runs. These metrics can be very helpful in evaluating how effective automated test passes have been and in further guiding the test effort.

  • Load test perspective. As described in the previous section, load testing is a great way to measure the performance of your application under stress and to profile the effect the application has on system resources. The load test perspective allows developers and testers to evaluate the results of load testing across multiple test runs, letting teams trend the results over time to ensure that comparable application builds or releases have not regressed in quality.

  • Test results perspective. Similar to the load test perspective, this set of results allows teams to track and trend the results of their test efforts across multiple application builds or releases. The data can be analyzed by test outcome, specific build, type of test, or other test dimensions. This data is very useful in showing how feature testing is progressing, as well as how complete, or incomplete, the testing effort is.

  • Current work item perspective. This provides application development teams with analysis of the current work items and their respective statuses within Team Foundation Server. Although this perspective is largely used to understand overall project progress, it can also be very useful in evaluating bug metrics and tracking progress on resolving and closing issues in the application.

Although this is not a complete list of the perspectives provided by TFS, it does represent the set most useful to the testing initiative. These metrics and reports are geared toward helping teams evaluate their progress relative to the overall project, but especially with respect to testing. Incorporating these tools into the testing process gives development teams actionable data so that they can proactively manage the testing effort and ensure they are achieving their quality and testing goals. If these metrics do not precisely meet the requirements of your team, TFS makes it simple to generate your own by connecting to the TFS data warehouse with tools such as Microsoft Excel. When development teams recognize the value that this data provides, they will very likely revisit their existing work-tracking processes and augment them to further increase the usefulness of the resulting data. This is a testament to the flexibility and value that TFS provides application development teams and to how, as an end-to-end toolset, it can help teams manage and improve the quality of application code.
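
If your team would rather pull such numbers programmatically than through Excel, the sketch below shows one possible approach: using the TFS object model to run a work item query for bugs that are still open. The server URL, team project name, and query are placeholders chosen for illustration, not a prescription.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class BugMetrics
{
    static void Main()
    {
        // Hypothetical server URL and team project name.
        TeamFoundationServer tfs =
            TeamFoundationServerFactory.GetServer("http://tfsserver:8080");
        WorkItemStore store = (WorkItemStore)tfs.GetService(typeof(WorkItemStore));

        // Work item query for bugs in the project that are not yet closed.
        WorkItemCollection activeBugs = store.Query(
            "SELECT [System.Id], [System.Title], [System.State] " +
            "FROM WorkItems " +
            "WHERE [System.TeamProject] = 'MyProject' " +
            "AND [System.WorkItemType] = 'Bug' " +
            "AND [System.State] <> 'Closed'");

        Console.WriteLine("Active bugs: {0}", activeBugs.Count);
    }
}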
