Foreword by Richard Mark Soley: Software Quality Is Still a Problem

Richard Mark Soley, Ph.D., Chairman and Chief Executive Officer, Object Management Group, Lexington, Massachusetts, U.S.A.

Since the dawn of the computing age, software quality has been an issue for developers and end users alike. I have never met a software user—whether of a mainframe, minicomputer, personal computer, or personal device—who is happy with the quality of its software. From requirements definition, to user interface, to likely use case, to errors and failures, software infuriates people every minute of every day.

Worse, software failures have had life-changing effects on people. The well-documented Therac-25 user interface failure literally caused deaths. The initial Ariane-5 rocket launch failure was in software. The Mars Climate Orbiter crash landing was caused by a disagreement between two development teams over measurement units. Failures in banking, trading, and other financial services caused by software surround us; no one is surprised when systems fail, and the (unfortunately generally correct) assumption is that software was the culprit.

From the point of view of the standardizer and the methodologist, the most difficult thing to accept is the fact that methodologies for software quality improvement are well known. From academic perches as disparate as Carnegie Mellon University and Queen's University (Prof. David Parnas) to the Eidgenössische Technische Hochschule Zürich (Prof. Bertrand Meyer), detailed research and well-written papers have appeared for decades, detailing how to write better-quality software. The Software Engineering Institute, founded some 30 years ago by the United States Department of Defense, has focused precisely on the problem of developing, delivering, and maintaining better software, through the development, implementation, and assessment of software development methodologies (most importantly the Capability Maturity Model and later updates).

Still, trades go awry, distribution networks falter, companies fail, and energy goes undelivered because of software quality issues. Worse, correctable problems such as common security weaknesses (most infamously the buffer overflow weakness) are written every day into security-sensitive software.

Perhaps methodology isn't the only answer. It's interesting to note that, in manufacturing fields outside of the software realm, there is the concept of acceptance of parts. When Boeing and Airbus build aircraft, they do it with parts built not only by their own internal supply chains, but in great (and increasing) part, by including parts built by others, gathered across international boundaries and composed into large, complex systems. That explains the old saw that aircraft are a million parts, flying in close formation! The reality is that close formation is what keeps us warm and dry, miles above ground; and that close formation comes from parts that fit together well, that work together well, that can be maintained and overhauled together well. And that requires aircraft manufacturers to test the parts when they arrive in the factory and before they are integrated into the airframe. Sure, there's a methodology for building better parts—those methodologies even have well-accepted names, like “total quality management,” “lean manufacturing,” and “Six Sigma.” But those methodologies do not obviate the need to test parts (at least statistically) when they enter the factory.

Quality Testing in Software

Unfortunately, that ethos never made it into the software development field. Although you will find regression testing and unit testing, and specialized unit-testing tools like JUnit in the Java world, there has never been a widely accepted practice of software part testing based solely on the (automated) examination of the software itself. My own background in the software business included a (non-automated) examination phase: the Multics Review Board quality testing requirement for the inclusion of new code into the Honeywell Multics operating system 35 years ago measurably and visibly increased the overall quality of the Multics code base. That experience showed that examination, even human examination, was of value to both development organizations and system users. The cost, however, was rather high and has only been considered acceptable for systems with very high failure impacts (for example, in the medical and defense fields).

When Boeing and Airbus test parts, they certainly do some hand inspection, but there is far more automated inspection. After all, one can't see inside the parts without X-ray and NMR machines, and one can't test metal parts to destruction (to determine tensile strength, for example) without automation. That same automation should and must be applied in testing software—increasing the objectivity of acceptance tests, increasing the likelihood that those tests will be applied (due to lower cost), and eventually increasing the quality of the software itself.

Enter Automated Quality Testing

In late 2009, the Object Management Group (OMG) and the Software Engineering Institute (SEI) came together to create the Consortium for IT Software Quality (CISQ). The two groups realized the need to find another approach to increase software quality, since

• Methodologies to increase software process quality (such as CMMI) had had insufficient impact on their own in increasing software quality.

• Software inspection methodologies based on human examination of code tend to be error-prone, subjective, and inconsistent, and are generally too expensive to be widely deployed.

• Existing automated code evaluation systems had no consistent (standardized) set of metrics, resulting in inconsistent results and very limited adoption in the marketplace.

The need for the software development industry to develop and widely adopt automated quality tests was absolutely obvious, and the Consortium immediately embarked on a process (based on OMG's broad and deep experience in standardization and SEI's broad and deep experience in assessment) to define automatable software quality standard metrics.

Whither Automatic Software Quality Evaluation?

The first standard that CISQ brought through the OMG process, arriving at the end of 2012, was a consistent, reliable, and accurate complexity metric for code—in essence an update to the function point concept. Function points were first defined in 1979, and by 2012 there were five ISO standards for counting them, none of which was absolutely reliable and repeatable; that is, individual (human) function point counters could come up with different results when counting the same piece of software twice! CISQ's Automated Function Points (AFP) standard is fully automatable and produces absolutely consistent results from one run to the next.

That doesn't sound like much of an accomplishment, until one realizes that one can't compute a defect, error, or other size-dependent metric without an agreed sizing strategy. AFP provides that strategy, and in a consistent, standardized fashion that can be fully automated, making it inexpensive and repeatable.
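To make the dependence on sizing concrete (the figures here are hypothetical, purely for illustration, and not drawn from CISQ data): a density metric is only meaningful once its denominator is fixed. If an application contains 30 confirmed defects and measures 1,200 automated function points, its defect density is 30 / 1,200 = 0.025 defects per function point, or 25 defects per thousand function points. If two counters disagree on the size—say, 1,200 versus 1,500 function points—the same application appears to have a density of either 25 or 20 defects per thousand function points, which is exactly the inconsistency a fully automated sizing standard removes.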

In particular, how can one measure the quality of a software architecture without a baseline, without a complexity metric? AFP provides that baseline, and further quality metrics, under development by CISQ and expected to be standardized this year, provide the yardstick against which to measure software, again in a fully automatable fashion.

Is it simply lines of code that are being measured, or in fact entire software designs? Quality is in fact inextricably connected to architecture in several ways: not only can poor coding or modeling quality lead to poor usability and poor fitness for purpose, but poor software architecture can lead to a deep mismatch with the requirements that led to the development of the system in the first place.

Architecture Intertwined with Quality

Clearly software quality—in fact, system quality in general—is a fractal concept. Requirements can poorly quantify the needs of a software system; architectures and other artifacts can poorly outline the analysis and design against those requirements; implementation via coding or modeling can poorly execute the design artifacts; testing can poorly exercise an implementation; and even quotidian use can incorrectly take advantage of a well-implemented, well-tested design. Clearly, quality testing must take into account design artifacts as well as those of implementation.

Fortunately, architectural quality methodologies (and indeed quality metrics across the landscape of software development) are active areas of research, with promising approaches. Given my own predilections and the technical focus of OMG over the past 16 years, clearly modeling (of requirements, of design, of analysis, of implementation, and certainly of architecture) must be at the fore, and model- and rule-based approaches to measuring architectures are featured here. But the tome you are holding also includes a wealth of current research and understanding, from measuring requirements design against customer needs to usability testing of completed systems. If the software industry—and that's every industry these days—is going to increase not only the underlying but also the perceived level of software quality for our customers, we are going to have to address quality at all levels, and an architectural, holistic view is the only way we'll get there.

30 August 2013

About the Author

Dr. Richard Mark Soley is Chairman and Chief Executive Officer of OMG®. As Chairman and CEO of OMG, Dr. Soley is responsible for the vision and direction of the world’s largest consortium of its type. Dr. Soley joined the nascent OMG as Technical Director in 1989, leading the development of OMG’s world-leading standardization process and the original CORBA® specification. In 1996, he led the effort to move into vertical market standards (starting with healthcare, finance, telecommunications, and manufacturing) and modeling, leading first to the Unified Modeling Language™ (UML®) and later the Model Driven Architecture® (MDA®). He also led the effort to establish the SOA Consortium in January 2007, leading to the launch of the Business Ecology Initiative (BEI) in 2009. The Initiative focuses on the management imperative to make business more responsive, effective, sustainable, and secure in a complex, networked world through practice areas including Business Design, Business Process Excellence, Intelligent Business, Sustainable Business, and Secure Business. In addition, Dr. Soley is the Executive Director of the Cloud Standards Customer Council, helping end users transition to cloud computing and direct requirements and priorities for cloud standards throughout the industry.

Dr. Soley also serves on numerous industrial, technical, and academic conference program committees and speaks all over the world on issues relevant to standards, the adoption of new technology, and creating successful companies. He is an active angel investor and was involved in the creation of both the Eclipse Foundation and Open Health Tools.

Previously, Dr. Soley was a cofounder and former Chairman/CEO of A. I. Architects, Inc., maker of the 386 HummingBoard and other PC and workstation hardware and software. Prior to that, he consulted for various technology companies and venture firms on matters pertaining to software investment opportunities. Dr. Soley has also consulted for IBM, Motorola, PictureTel, Texas Instruments, Gold Hill Computer, and others. He began his professional life at Honeywell Computer Systems working on the Multics operating system.

A native of Baltimore, Maryland, U.S.A., Dr. Soley holds bachelor’s, master’s, and doctoral degrees in Computer Science and Engineering from the Massachusetts Institute of Technology.
