Chapter 6. Measurement Metrics for Tool Quality

 

There is an old saying with software that three years from now, no one will remember if you shipped an awesome software release a few months late. What customers will still remember three years from now is if you shipped a software release that wasn’t ready a few months too soon. It takes multiple product releases to change people’s quality perception about one bad release.

 
 --Scott Guthrie

The risk of failure in software development is rising rapidly because of the demand for software that is higher in quality, more cost effective, and delivered on time. With this growing focus comes a definite need to improve the quality of software to meet the needs of the industry. One common problem when trying to determine how to improve quality is establishing a meaningful way to measure it so that you can quantify your results. If a developer told you that a piece of software was top-notch quality, what exactly would that mean? If a developer told you instead that a piece of software had failed only twice in over three years of use, there would be more value behind that statement. The only difference between the two statements is that the second presents a quantifiable measurement: the number of times the software failed in a three-year period. Both statements could be referring to the same piece of software, yet the second is the only one that is an acceptable and accurate description of software quality.

When performing any kind of measurement, you need what is known as a metric, which is commonly defined as a quantitative measure of the degree to which a system, component, or process possesses a given attribute. Software development quality can be measured by a number of metrics, including maintainability, performance, usability, testability, portability, reliability, and efficiency.

The International Organization for Standardization (ISO) has created a set of software quality standards and also describes how to collect metrics for them. The metrics discussed in this chapter are a condensed overview of that work.

Tools, like any software project, require a high level of quality; especially when a tool produces game content or enhances workflow, its rate of failure must be extremely low. This chapter presents some measurement metrics and concepts for development, all of which greatly impact the lifetime cost of a tool.

Metric: Maintainability

Perhaps one of the most important metrics to consider in software development, and definitely the one evangelized in this book, is maintainability, which characterizes any successful tool. The greatest amount of development time in the game industry is spent on maintenance: extending or enhancing a product that already exists. A tool should always be designed with maintainability in mind, so that the code is easy to repair and extend for future products or processes.

This metric typically looks at how many times a certain tool has been reused across multiple products or processes, how much additional time was needed to relearn the inner workings of the code, and how much development time was spent enhancing the tool to suit the needs of another product.

Metric: Traceability

The idea of traceability was introduced mainly by object-oriented software engineering, and it holds that documentation should be able to show why a particular implementation decision was made. Typically, a tool, especially one that is medium to large in scale, will have a design document detailing how the application will function, and the design may even be represented using the Unified Modeling Language (UML). Traceability refers to the ability to look at a functionality requirement in a design document (known as a use case when using UML) and easily understand how to perform that task in the application itself.

There are many ways to discuss traceability and how to achieve it, but it all boils down to how well the application and its underlying architecture follow the design document specifications. Actors in a design document, the people who use a certain component of the system, should be easily identifiable in the object model, and all functions should be named similarly to their associated use cases. For example, if the design document specifies a feature called Search Entities and its associated code function is labeled FindEntityList, traceability between the documentation and the code is low, because further investigation is needed to make sure the function performs the correct task. If the function were labeled SearchEntities instead, traceability between the documentation and the code would be much better.
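To make this concrete, here is a minimal, hypothetical C# sketch (the EntityBrowser class and Entity type are invented for illustration). Naming the method after the Search Entities use case keeps the mapping between documentation and code obvious:

using System.Collections.Generic;

// Hypothetical sketch: the "Search Entities" use case from the design
// document maps directly to a method of the same name, which keeps
// traceability between the documentation and the code high.
public class EntityBrowser
{
    private readonly List<Entity> _entities = new List<Entity>();

    // Use case: "Search Entities" -- returns all entities whose name
    // contains the given search text.
    public List<Entity> SearchEntities(string searchText)
    {
        return _entities.FindAll(e => e.Name.Contains(searchText));
    }
}

public class Entity
{
    public string Name;
}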

Metric: Performance

Generally, one of the most difficult areas in any software product of reasonable complexity is performance profiling and tuning. Performance describes issues such as memory leaks or how responsive the user interface is.

This metric typically profiles the application for declines in performance or misuse of resources. Performance is very important to game tools because a responsive user interface yields much more productivity than a sluggish one.

Later chapters in this book cover performance, including how to access performance counters to profile operations, as well as optimization tips and tricks for the .NET platform.
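As a small taste of what those chapters cover, the following minimal sketch times a hypothetical tool operation using the standard Stopwatch class from System.Diagnostics. The ProcessContent method is an invented stand-in for real work, such as importing a mesh:

using System;
using System.Diagnostics;

static class ProfilerExample
{
    static void Main()
    {
        // Time a hypothetical content-processing operation with the
        // high-resolution Stopwatch timer.
        Stopwatch timer = Stopwatch.StartNew();
        ProcessContent();
        timer.Stop();

        Console.WriteLine("ProcessContent took {0} ms",
                          timer.ElapsedMilliseconds);
    }

    // Placeholder for a real tool operation, such as importing a mesh.
    static void ProcessContent()
    {
        System.Threading.Thread.Sleep(100); // simulate work
    }
}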

In some measurement contexts, the performance metric is combined with the efficiency metric.

Metric: Usability

Another important issue in software development is how easy it is to reuse or extend a piece of software. To accomplish this, it is important that the interfaces for the software be well documented and easy to use.

A developer should be able to read the documentation for the tool and understand what the tool is supposed to do at a high level. Additionally, a developer should be able to read the source code and easily understand what is going on behind the scenes.

In some measurement contexts, the usability metric is combined with the maintainability metric.

Metric: Testability

Testing is a required step in any software project, and there are certain considerations for building software that is easy to test. Unit testing is easiest to perform in loosely coupled architectures, where individual objects can be tested with minimal dependency on other objects. If components can be tested in isolation from one another, there is a much greater chance that performance issues and hard-to-find bugs will be discovered.

Avoid design patterns, such as the singleton, that make an architecture tightly coupled; design software for testability so that the work of testers is less difficult and can be done in a much shorter period of time.
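The following minimal C# sketch shows one way to keep coupling loose; the IAssetStore interface, Exporter class, and both store implementations are hypothetical names invented for illustration. Because the dependency is injected through an interface rather than grabbed from a singleton, a unit test can substitute an in-memory fake and exercise the exporter in complete isolation:

using System;

// Abstraction for asset storage; production code uses the file system,
// while unit tests substitute an in-memory fake.
public interface IAssetStore
{
    void Save(string name, byte[] data);
}

public class FileAssetStore : IAssetStore
{
    public void Save(string name, byte[] data)
    {
        System.IO.File.WriteAllBytes(name, data);
    }
}

public class MemoryAssetStore : IAssetStore
{
    public int SaveCount;
    public void Save(string name, byte[] data) { SaveCount++; }
}

public class Exporter
{
    private readonly IAssetStore _store;

    // The dependency is injected rather than reached through a
    // singleton, which keeps the architecture loosely coupled.
    public Exporter(IAssetStore store) { _store = store; }

    public void Export(string name) { _store.Save(name, new byte[0]); }
}

A test can then construct new Exporter(new MemoryAssetStore()) and verify SaveCount without ever touching the disk.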

Metric: Portability

The portability metric measures how easily software can be moved from one operating system to another. Some game development studios target multiple operating systems and platforms with their products, so portability is important to them, and it pays to build common components that are easily portable to other platforms. Even if a studio typically outsources cross-platform work to another development company, there are some practices that should be followed. The longer it takes to port the original code to another platform, the greater the overall cost of the conversion process; and the more a software component relies on platform-specific technology, the more code must be written in the porting process.

The biggest practice to follow is keeping all calls to the operating system in specific components. Abstraction is very useful in this situation: interfaces can be written that define how a particular component communicates with the system, and operating system-specific components can then be written to implement those interfaces, creating a flexible plugin-based architecture.

Plugin-based architectures are commonly used in 3D API-agnostic graphics engines that can use either OpenGL or Direct3D. Aside from the benefits of an abstracted rendering system on Windows alone, OpenGL is pretty much the only cross-platform hardware-accelerated 3D API that can be employed in games. By using an abstracted rendering system that supports OpenGL, you do not have to worry about porting the graphics engine to other platforms, because you have already accounted for the differences.
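A minimal sketch of such an abstraction in C# might look like the following; the IRenderer interface and both implementations are hypothetical and heavily simplified, as a real rendering interface would be much larger:

// All rendering calls go through this interface, so only the
// implementations contain API-specific code.
public interface IRenderer
{
    void BeginFrame();
    void DrawMesh(int meshId);
    void EndFrame();
}

// One implementation per 3D API; the rest of the tool never references
// OpenGL or Direct3D directly.
public class OpenGLRenderer : IRenderer
{
    public void BeginFrame() { /* OpenGL-specific frame setup */ }
    public void DrawMesh(int meshId) { /* glDrawElements, etc. */ }
    public void EndFrame() { /* swap the buffers */ }
}

public class Direct3DRenderer : IRenderer
{
    public void BeginFrame() { /* Direct3D-specific frame setup */ }
    public void DrawMesh(int meshId) { /* DrawIndexedPrimitive, etc. */ }
    public void EndFrame() { /* present the back buffer */ }
}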

Operating system-agnostic design can also be used for other hardware-based services, such as audio, video, input, and networking.

Metric: Reliability

An extremely important factor in the success of any software project is its reliability. A tool is pretty useless to designers if it crashes or corrupts its data almost every time it is used. The reliability metric is a measure of the failure rate of the software. If you run a certain tool a thousand times, what percentage of those runs will fail? A related figure, the average operating time between failures, is generally referred to as the mean time to failure (MTTF).

Different failure rates are acceptable at different stages of software development. At the beginning of development, the software fails quite often. As development progresses, bugs are removed and the failure rate declines to the point where the tool rarely fails. When failures are rare and the software is ready for integration and deployment, the failure rate is said to be acceptable.
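To illustrate the arithmetic with made-up numbers, this minimal C# sketch computes a failure rate and a mean time to failure from hypothetical test data:

using System;

static class ReliabilityExample
{
    static void Main()
    {
        // Hypothetical data: 1,000 tool runs, 4 failures, and 500 total
        // hours of operation across those runs.
        int totalRuns = 1000;
        int failures = 4;
        double operatingHours = 500.0;

        double failureRate = (double)failures / totalRuns; // 0.004, or 0.4%
        double mttf = operatingHours / failures;           // 125 hours

        Console.WriteLine("Failure rate: {0:P1}", failureRate);
        Console.WriteLine("Mean time to failure: {0} hours", mttf);
    }
}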

Workflow productivity with a tool is directly tied to its reliability. Losing work or requiring tedious workarounds to maintain stability is frustrating, and should be avoided at all costs. Spending the extra time to stabilize a tool can save the designers much more time in the long run.

Metric: Efficiency

Judging the efficiency of an application is relatively difficult because there are several things you must take into consideration. Some measurement contexts also combine the efficiency metric with the performance metric, while others do not.

One measurement of efficiency is the size of the application, which matters especially in circumstances where available disk space is limited, such as on handheld or other resource-constrained platforms. Smaller applications typically gain a slight performance boost over larger ones because of how the operating system manages the memory associated with processes.

The amount of memory required by the application to function optimally is also important to measure, especially in situations where memory is limited. An application that performed a task in four seconds using 1MB of memory could be considered more memory-efficient than an application that performed the same task in two seconds using 9MB of memory.
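One crude, hypothetical way to weigh that trade-off is a time-memory product, where a lower score is better; the numbers below are the ones from the example above:

using System;

static class EfficiencyExample
{
    static void Main()
    {
        // Hypothetical comparison: multiply task time by memory used.
        double scoreA = 4.0 * 1.0; // 4 seconds * 1MB = 4
        double scoreB = 2.0 * 9.0; // 2 seconds * 9MB = 18

        Console.WriteLine("A scores {0}, B scores {1}; lower is better",
                          scoreA, scoreB);
    }
}

This is only one possible weighting; which combination of the variables is appropriate depends on whether memory or speed is the scarcer resource for the tool in question.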

The speed of an algorithm can also be measured in terms of efficiency. An algorithm can be evaluated in terms of the time it takes to complete its work, and how it goes about doing that work. Issues like memory access, disk access, and network access can all be considered in this measurement.

Aside from efficiency or performance, complexity of the implementation relative to the task performed can also be considered. If an application or component is mired in complexity, it might not be the most efficient implementation of a solution, even if its performance is as good as or better than another less complex solution.

The efficiency metric involves studying several important variables to determine whether a solution, even one that meets its business objectives, is an efficient implementation.

Conclusion

In this chapter, I discussed what software quality measurements and metrics are, and why they are important. I also covered several metrics, such as maintainability, performance, and reliability, that can be used to produce and analyze high-quality software.

Note

For more information, refer to the book Metrics and Models in Software Quality Engineering, Second Edition by Stephen H. Kan.
