24 Software Reuse

Acronyms

AC Advisory Circular
CAST Certification Authorities Software Team
COTS commercial off-the-shelf
FAA Federal Aviation Administration
PDS previously developed software
PSAC Plan for Software Aspects of Certification
RSC reusable software component
RTOS real-time operating system
SOUP software of unknown pedigree
U.S. United States

24.1 Introduction

Software reuse is an important subject because the majority of software projects, at least in the aviation world, are derivatives of existing systems. New or clean-sheet avionics, electrical systems, airframes, and engines are actually rather rare. DO-178C encourages a well-organized and disciplined process that not only supports the initial certification and compliance with the regulations but also supports ongoing modifications and maintenance.

It is also important to discuss software reuse because there have been some rather well-publicized consequences of unsuccessful software reuse. The explosion of the Ariane Five rocket is one example. The software used on the Ariane Five was originally intended for the Ariane Four and worked properly on that platform. However, the launch characteristics of the Ariane Four and Five rockets were different. Improper reuse of the Ariane Four software caused the Ariane Five to explode [1]. In her book Safeware, Nancy Leveson explains that there is a myth that software reuse increases safety. She provides a number of examples of safety-related problems that arose from software reuse (three of which follow). First, the Therac-25 medical device reused parts from its predecessor, the Therac-20. An error existed in the Therac-20 software, but it had no serious consequences on the Therac-20 operation, except an occasional blown fuse. Unfortunately, when utilized on the Therac-25, the error led to massive radiation overdoses and the death of at least two people. Software reuse was not the sole reason for the Therac-25 problem, but it was a significant contributing factor. Second, air traffic control software used in the United States (U.S.) was reused in Great Britain. In this case, the British users failed to account for the longitudinal differences between the U.S. and Great Britain. Third, problems arose when software written for an aircraft in the northern hemisphere and above sea level was reused in the southern hemisphere or below sea level [2].

Software must be reused with extreme caution. This chapter examines reuse by discussing previously developed software (PDS) and commercial off-the-shelf (COTS) software. The terms reuse, PDS, COTS software, and software component are interrelated: COTS software is a subset of PDS, and reuse is accomplished by implementing PDS components. Perhaps some definitions will clarify the relationship:

  • Software reuse: There are greatly ranging opinions on and definitions of software reuse. I prefer to look at it as a process of implementing or updating systems using existing software assets. Assets may be software components, software requirements, software design, source code, and other software life cycle data (including plans, standards, verification cases and procedures, and tool qualification data). Software reuse may occur within an existing system, across similar systems, or in widely differing systems.

  • Software component: DO-178C defines component as: “A self-contained part, combination of parts, sub-assemblies, or units that perform a distinct function of a system” [3]. However, I prefer the following, more descriptive definition: “an atomic software element that can be reused or used in conjunction with other components; ideally, it should work without modification and without the engineer needing to know the content and internal function of the component. However, the interface, functionality, pre-conditions and post-conditions, performance characteristics, and required supporting elements must be well known” [4].

  • COTS software: Commercially available software components that are not intended to be customized or enhanced by the user, although they may be configured for user-specific needs [3].

  • PDS: “Software already developed for use. This encompasses a wide range of software, including COTS software through software developed to previous or current software guidance” [3].

Together, these definitions indicate that software reuse often occurs using PDS that is packaged as a component. The PDS may be commercially available (COTS software), developed in-house from a past project, developed using DO-178[ ],* developed using some other guidance or standard (e.g., a military or automotive standard), or developed using no guidance at all. Table 24.1 provides some examples of PDS in four categories. Some PDS is utilized without change, while some requires modification for use in the new system or environment.

Table 24.1 Examples of Previously Developed Software

Non-COTS examples developed to DO-178[ ]:

  • An avionics application developed to DO-178A.

  • DO-178B-compliant flight control software to be modified for use in a similar system.

  • DO-178C-compliant battery management software to be installed in a new aircraft.

COTS examples developed to DO-178[ ]:

  • A real-time operating system (RTOS) with DO-178B or DO-178C data available.

  • A DO-178C-compliant board support package for a specific RTOS and microprocessor.

Non-COTS examples not developed to DO-178[ ]:

  • An electrical power system developed using U.S. Department of Defense Military Standard 498.

  • Flight management system software for military aircraft developed using the United Kingdom’s Defence Standard 00-55.

  • Brake system software for an automobile.

  • A device driver for a controller area network databus.

COTS examples not developed to DO-178[ ]:

  • An operating system without DO-178[ ] data (e.g., Windows).

  • A compiler-supplied library.

  • Databus software used in the automotive market.

  • A communication stack compliant with the Open Systems Interconnection (OSI) protocol.

In order for software to be truly reused (rather than salvaged), it must be designed to be reusable. While this is not required per any aviation standard, it is a practical reality and a best practice. Therefore, this chapter discusses how to plan and design for reuse. The focus then shifts to examining how to reuse PDS. A brief survey of software service history, which is closely related to PDS, concludes the chapter.

24.2 Designing Reusable Components

Software reuse doesn’t just happen. A successful reuse effort requires planning and careful design decisions. In order for components to be successfully reused, they must be designed to be reusable. Designing for reuse normally increases the initial development time and cost, but the return on investment is recovered when the component is reused without significant rework.

The Ariane Five, Therac-25, and other examples discussed earlier illustrate what can happen when software reuse is not carefully evaluated and implemented. As software becomes more complex and more widely used, the concerns about software reuse in safety-critical systems also increase. Reuse can be a viable option; however, it must be evaluated and implemented with caution. Designing software to be reusable in the first place helps avoid such incidents.

In general, the software industry, particularly the aviation software industry, is rather immature when it comes to developing for reuse. In the 10 years since I completed my master’s thesis on the subject of reuse and arrived at the same conclusion, there have been some advances in the reuse arena for components like real-time operating systems and library functions. Unfortunately, designing for reuse still seems to be a rather elusive goal in the aviation industry. Frequently, a project starts out with the intent to design for reuse. However, once schedule and budget constraints are imposed, the reuse goals are abandoned.

Following are 16 recommendations to consider when designing for reuse. These interrelated recommendations summarize vital programmatic and technical concepts for effective reuse.

Recommendation 1: Secure management’s long-term commitment to reuse. Designing a component to be reusable will take longer and cost more than developing one that is not designed for reuse. The magnitude of the schedule and cost increase varies depending on the processes used, the organization’s experience and domain expertise, the type of component being developed, management’s commitment, and a variety of other factors. Some estimate that a reusable component costs two to three times as much as the same component for one-time use [5]. It is important for management to understand this reality and to make the long-term commitment. For reuse to be successful, upper-level management must champion it. Most reuse failures come from management’s lack of commitment to support the effort when the going gets tough.

Recommendation 2: Establish and train a reuse team. A well-trained team with the primary objective of developing reusable software is more successful than a development-as-usual team. It may be a small dedicated team, or it may be composed of people who only contribute part of their time to the effort. However, to ensure accountability, it is important to have an identified team and to ensure that all members are properly trained.

Recommendation 3: Evaluate existing projects to identify issues that prevent reuse. It is beneficial to evaluate existing software projects within your company to identify what issues prevent reuse. Design practices, coding practices, hardware interfaces, development environment, and compiler issues may prevent reuse. By compiling a list of issues that prevent reuse, the team can begin to develop strategies to overcome these issues.

Recommendation 4: Try a pilot project. Rather than trying to change the organization overnight, it is often best to start with a small pilot project. That project can be the basis for identifying reuse practices and training the reuse team. A small, successful project builds experience and confidence.

Recommendation 5: Document reuse practices and lessons learned. Before launching into a reusable component development effort, draft practices for the team to follow. After the first project or two, those practices should be updated. Ideally, practices and procedures are continually refined as lessons are learned from real-life experience. Eventually, the practices can be established as company-wide recommendations or procedures.

Recommendation 6: Identify intended users during the component development and support them throughout. Since the users are the component customers, it is important to identify who they are in order to ensure that their needs are met, that any conflicting needs are identified, and that they are informed of any problems that arise while the component is developed and maintained. It is also important to design the component to be user friendly, which includes such characteristics as (1) easy identification (users should be able to easily see if the component meets their needs), (2) ease of use (users should be able to quickly learn how to use the component), and (3) usability by integrators with a wide range of experience (novices to experts) [6].

Recommendation 7: Implement a domain engineering process. A sound domain engineering process is essential to successful software reuse. A domain is “a group or family of related systems. All systems in that domain share a set of capabilities and/or data” [7]. Domain engineering is the process of creating assets through domain analysis, design, and implementation that can be managed and reused. The domain engineer suggests an architecture that meets the majority of the application needs and is suitable for future reuse. It is important to distinguish the concepts of domain engineering and reuse engineering. Domain engineering is developing for reuse; whereas reuse engineering is developing with reuse. First-rate domain engineering is critical to being able to reuse entire components or design portions of those components, and not just salvage code.

Recommendation 8: Identify and document the usage domain. A usage domain is a declared set of characteristics for which the following can be shown:

  • The component is compliant to its functional, performance, and safety requirements.

  • The component meets all the assertions and guarantees regarding its defined allocatable resources and capabilities.

  • The component performance is fully characterized, including fault and error handling, failure modes, and behavior during adverse environmental effects [8].

It is important to document the usage domain because when reusing the component, a usage domain analysis will need to be performed to evaluate the component reuse in the new installation. The usage domain analysis evaluates the subsequent reuse of the component to ensure that (1) any assumptions made by the component developer are addressed, (2) the impact of the component on the installation is considered, (3) the impact of the new environment on the component is considered, and (4) any interfaces with the component in the new installation are consistent with the component’s usage domain and interface specification [8].

Recommendation 9: Create small, well-defined components. Steve McConnell writes: “You might be better off focusing your reuse work on the creation of small, sharp, specialized components rather than on large, bulky, general components. Developers who try to create reusable software by creating general components rarely anticipate future users’ needs adequately… ‘Large and bulky’ means ‘too hard to understand,’ and that means ‘too error-prone to use’” [5]. In order for a component to be reusable, its functionality must be clear and well documented. The functionality is defined at a high level: what the component does, not how it is implemented. The functionality should have a single purpose, only provide functions related to its purpose, and be properly sized (i.e., not so small that it is of little use and not so large that it becomes unmanageable). Additionally, in order to allow smooth and successful integration, a software component must have a well-defined interface. An interface defines how the user interacts with the component. A successful interface has consistent syntax, logical design, predictable behavior, and a consistent method of error handling. A well-defined interface is complete, consistent, and cohesive—it provides what the user needs to make the component work [9].
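
To make this concrete, the following is a minimal sketch of what a well-defined component interface might look like in C. The component, type, and function names are hypothetical (they are not taken from any standard or from the text); the point is that the purpose, preconditions, postconditions, and error-handling convention are all stated at the interface.

    /* altitude_filter.h - hypothetical reusable filtering component.
       Purpose: smooth a stream of altitude samples (single, well-defined purpose).
       Precondition: alt_filter_init() is called once before alt_filter_update().
       Postcondition: *filtered_ft is written only when ALT_FILTER_OK is returned.
       Error handling: every entry point returns an alt_filter_status_t; no other
       error mechanism (global flags, aborts) is used. */
    #ifndef ALTITUDE_FILTER_H
    #define ALTITUDE_FILTER_H

    #include <stdint.h>

    typedef enum {
        ALT_FILTER_OK        = 0,  /* request completed successfully        */
        ALT_FILTER_NOT_INIT  = 1,  /* alt_filter_init() has not been called */
        ALT_FILTER_BAD_PARAM = 2   /* argument outside the documented range */
    } alt_filter_status_t;

    /* Initialize the filter; gain_percent must be in the range 1-100. */
    alt_filter_status_t alt_filter_init(uint8_t gain_percent);

    /* Filter one sample; raw_ft must be in the range -2,000 to 60,000 feet.
       On success, the filtered value is written to *filtered_ft. */
    alt_filter_status_t alt_filter_update(int32_t raw_ft, int32_t *filtered_ft);

    #endif /* ALTITUDE_FILTER_H */

A user of such a component can determine from the interface alone whether it fits the need, how to call it, and how every failure is reported, without reading the implementation.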

Recommendation 10: Design with portability in mind. Portability is a desirable attribute for most software products because it enhances the value of a software package both by extending its useful life and by expanding the range of installations in which it can be used [10]. There are two types of portability: binary portability (porting the executable form) and source portability (porting the source language representation). Binary portability is clearly desirable, but is usually possible only across strongly similar processors (e.g., same binary instruction set) or if a binary translator is available. Source portability assumes availability of source code, but provides opportunities to adapt a software unit to a wide range of environments [11]. In order to obtain binary or source portability, the software must be designed for portability. In general, incorporating portability calls for design strategies such as the following [10]:

  1. Identify the minimum needed environmental requirements and assumptions.

  2. Eliminate unnecessary assumptions throughout the design.

  3. Identify required interfaces specific to the environment (such as procedure calls, parameters, and data structures).

  4. For each interface, do one of the following:

    1. Encapsulate the interface in a suitable module, package, object, etc. (i.e., anticipate the need to adapt the interface for each target system).

    2. Identify a standard for the interface that is available in most target environments, and follow this standard throughout the design (see the sketch following this list).
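
As an illustration of strategy 4a, the sketch below hides an environment-specific service (reading a millisecond timer) behind a small module so that only this module changes when the component is ported. It is a hypothetical example; the target macros, file layout, and register address are assumptions for illustration, not taken from any particular platform or board support package.

    /* sys_time.h - portable interface; the rest of the component sees only this. */
    #include <stdint.h>
    uint32_t sys_time_ms(void);   /* milliseconds since power-up */

    /* sys_time.c - the only file that changes for each target environment. */
    #if defined(TARGET_POSIX)
      #include <time.h>
      uint32_t sys_time_ms(void)
      {
          struct timespec ts;
          clock_gettime(CLOCK_MONOTONIC, &ts);
          return (uint32_t)((uint64_t)ts.tv_sec * 1000u +
                            (uint64_t)ts.tv_nsec / 1000000u);
      }
    #elif defined(TARGET_BARE_METAL)
      /* Hypothetical memory-mapped free-running millisecond counter. */
      #define TIMER_MS_REG (*(volatile uint32_t *)0x40001000u)
      uint32_t sys_time_ms(void)
      {
          return TIMER_MS_REG;
      }
    #else
      #error "No target environment selected"
    #endif

The component calls sys_time_ms() wherever it needs time, so porting to a new environment means adapting one small, clearly identified module rather than touching the component’s logic.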

Recommendation 11: Design the component robustly. Bertrand Meyer writes: “The component must not fail, when it is used properly” [6]. Since the component will have multiple users, it must be designed robustly. The component should anticipate unexpected inputs and address them (e.g., using error handling capabilities). Robustness must be considered when developing component requirements and design.
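
A minimal sketch of this idea in C follows (the function, limits, and status codes are hypothetical): the entry point validates its inputs against the documented ranges and fails through its defined error-handling path instead of propagating a bad value.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef enum { STATUS_OK = 0, STATUS_NOT_READY = 1, STATUS_BAD_INPUT = 2 } status_t;

    #define FUEL_QTY_MIN_KG 0
    #define FUEL_QTY_MAX_KG 200000

    static bool    initialized   = false;
    static int32_t last_valid_kg = 0;

    status_t fuel_qty_init(void)
    {
        initialized   = true;
        last_valid_kg = 0;
        return STATUS_OK;
    }

    status_t fuel_qty_update(int32_t measured_kg, int32_t *reported_kg)
    {
        /* Reject calls that violate the documented preconditions. */
        if (reported_kg == NULL) {
            return STATUS_BAD_INPUT;
        }
        if (!initialized) {
            return STATUS_NOT_READY;
        }
        /* Unexpected (out-of-range) input: hold the last valid value and
           report the condition rather than passing the bad reading along. */
        if (measured_kg < FUEL_QTY_MIN_KG || measured_kg > FUEL_QTY_MAX_KG) {
            *reported_kg = last_valid_kg;
            return STATUS_BAD_INPUT;
        }
        last_valid_kg = measured_kg;
        *reported_kg  = measured_kg;
        return STATUS_OK;
    }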

Recommendation 12: Package the data to be reusable. Federal Aviation Administration (FAA) Order 8110.49 chapter 12 discusses the reuse of software life cycle data in the aircraft certification environment [18]. In order for data (and the software itself) to be reusable, the software needs to be packaged for reuse. Often, this means having a full set of DO-178C life cycle data for each component. Order 8110.49 provides guidelines for reuse within a company, whereas FAA Advisory Circular (AC) 20-148 discusses the concept of software component reuse across company boundaries (e.g., reuse of an RTOS on multiple avionics systems). AC 20-148 provides guidance for how to document a component to be reusable. Whether seeking an FAA reuse acceptance letter or not, the component can still be packaged to be reusable from program to program. The Order and AC provide packaging suggestions.

Recommendation 13: Document the reusable component well. Since it is often unknown who will use the component in the future, it is important to thoroughly document the component. This involves creating documentation to ensure proper use and integration of the component (e.g., interface specification and user’s manual), as well as data to support certification and maintenance. AC 20-148 requires the creation of a data sheet that explains information needed by the user of the reusable software component (RSC) in order to ensure proper usage of the component. AC 20-148 (section 6.i) requires the following data in the data sheet: component functions, limitations, analysis of potential interface safety concerns, assumptions, configuration, supporting data, open problem reports, characteristics of the component (such as worst-case timing, memory, throughput), and other relevant information that supports the use of the component [12].

Recommendation 14: Document the design rationale. In order to effectively reuse a software component, its design rationale must be well documented. Both the design decisions of the component itself and the design decisions of the system that first integrates the component are important. The documented design decisions help determine where a component can “appropriately and advantageously be reused” [13]. Additionally, the documented design decisions for the system using the component can also be helpful to determine if a component is suitable for reuse in another system. Dean Allemang divides design rationale into two categories: (1) internal rationale (the relation of parts of the design to other parts of the same design—the way in which a component interacts with other components), and (2) external rationale (the relation of parts of the design to parts of different designs—use of component in multiple systems) [13].

Recommendation 15: Document safety information for the user. During the development and subsequent reuse of a component, safety must be considered. It is essential that the component developer defines the failure conditions, safety features, protection mechanisms, architecture, limitations, software levels, interface specifications, and intended use of the component. All interfaces and configurable parameters must be analyzed to describe the functional and performance effects of these parameters and interfaces on the user. The analysis documents required actions by the user to ensure proper operation. Additionally, per AC 20-148, section 5.f, an RSC developer must “produce an analysis of the RSC’s behavior that could adversely affect the users’ implementation (as in vulnerabilities, partitioning requirements, hardware failure effects, requirements for redundancy, data latency, and design constraints for correct RSC operation). The analysis may support the integrator’s or applicant’s safety analysis” [12].

Recommendation 16: Focus on quality, not quantity. Since reusable components may have multiple users, it is important to ensure that each component works as required. McConnell writes: “Successful reuse requires the creation of components that are virtually error-free. If a developer who tries to employ a reusable component finds that it contains defects, the reuse program will quickly lose its shine. A reuse program based on low-quality programs can actually increase the cost of developing software… If you want to implement reuse, focus on quality, not quantity” [5]. When developing AC 20-148, the FAA emphasized the fact that the first approval of a reusable component requires a high level of certification authority oversight and involvement to ensure it functions correctly.

24.3 Reusing Previously Developed Software

Three aspects of PDS are considered in this section: (1) a process to evaluate PDS for inclusion in a safety-critical system, particularly when required to meet the civil aviation regulations, (2) special considerations when reusing PDS that was not developed using DO-178[ ],* and (3) additional factors to evaluate when the PDS is also COTS software.

24.3.1 Evaluating PDS for Use in Civil Aviation Products

To better understand the evaluation process for the use of PDS in a civil aviation product (an aircraft, engine, or propeller), a flowchart is provided (see Figure 24.1). Although it may not cover all situations that arise, it should cover the majority of those that pertain to civil aviation projects. Such an approach could also be applied to other domains but may require some modification. Each block in the flowchart is numbered and described in the following list:

Figure 24.1 Process for evaluating PDS for civil aviation project.

  1. Determine if the PDS was approved in a civil aircraft installation. If the PDS was previously approved in a certified civil aircraft, engine, or propeller, it may be suitable for reuse without rework or re-verification. If it was not previously approved, it will need to be shown to meet DO-178C or to provide an equivalent level of assurance.

  2. Determine if the installation and the software level will be the same. If the software is being installed in the same system and used the same way (e.g., a traffic alert and collision avoidance system that is being installed in another aircraft without change to the hardware or software) and the safety impact is the same, the PDS is probably suitable for reuse without rework or re-verification. A similarity analysis at the system level is normally needed to evaluate the use of the software, confirm the safety considerations, and support the reuse claim (this information would likely go in the system-level plans). If the installation changes or the software level is inadequate, further evaluation is needed (see Block 4).

  3. Document the intent to use the PDS in the plans. The intent to use the PDS and the results of the similarity analysis need to be documented in the plans. Depending on the situation, this may be the system-level plans (e.g., if the entire system is reused) or in the Plan for Software Aspects of Certification (PSAC) (if only a software component is reused). The plans must explain the equivalent installation and adequacy of the software level, as explained in Items 1 and 2.

  4. Was the software originally approved using DO-178[ ]? This process block’s purpose is to identify if DO-178C or its predecessors were followed during the original development. If not, a gap analysis is performed to identify the gaps, and an approach for filling those gaps is identified (Blocks 5 and 6).

  5. Perform a gap analysis. A gap analysis involves an assessment of the PDS and its supporting data against the DO-178C objectives.

    For software of unknown pedigree (SOUP) this analysis may not be possible because the data is normally unavailable. Such software can usually only be approved to level D or E.

  6. Fill the gaps. Once the gaps have been identified, they need to be filled. DO-178C section 12.1.4, “Upgrading a Development Baseline,” provides guidance on this task. Depending on the original development, this may be a significant effort and may lead one to rethink the reuse plan. Some alternative approaches that may be used to fill the gaps are discussed later in this chapter (see Section 24.3.2). If the PDS is a COTS component, see Section 24.3.3 for additional recommendations to identify and fill the gaps.

  7. Determine if the software level is adequate. Determine the necessary software level for the new installation and determine if the software level of the PDS is adequate. For software not developed per DO-178[ ],* the answer is NO and a gap analysis is needed (see Blocks 5 and 6). If the software was developed per DO-178 or DO-178A, the equivalent levels for DO-178B and DO-178C are shown in Table 24.2.

  8. Upgrade the baseline. If the software level of the PDS is not adequate, the data (and possibly the software itself) will need to be updated to address the additional DO-178[ ] objectives. DO-178C section 12.1.4 describes this process. Normally, this involves additional verification activities. If the software was not developed using DO-178B or DO-178C, this upgrade may also require a gap analysis (Blocks 5 and 6) to bring it up to DO-178C compliance. The need for such an analysis depends on the desired software level increase and the ability of the existing data to support safety needs; the approach should be closely coordinated with the certification authority.

    Table 24.2 Software Level Equivalence

  9. Perform a usage domain analysis. For a new installation of the PDS, this analysis ensures the following, as a minimum:

    1. The software that will be used is the same as the originally approved software.

    2. There are no adverse effects on safety or operational capability of the aircraft.

    3. Equipment performance characteristics are the same.

    4. Any open problem reports are evaluated for impact in the new installation. If problem reports are not available, this could prohibit the reuse of the PDS.

    5. Ranges, data types, and parameters are equivalent to those in the original installation.

    6. Interfaces are the same.

    Some of this analysis is already addressed by other blocks; however, Block 9 is intended to gather the information in one place and to more thoroughly evaluate the intended reuse. This analysis could identify required changes to the software. If the PDS will be modified, this analysis may be performed in conjunction with the change impact analysis and documented as part of the change impact analysis. However, if the software and its environment do not change, this usage domain analysis is included in the plans (normally the PSAC or system certification plan).

  10. Determine if the software or its environment changed. The usage domain analysis (Block 9) is used to determine if there are any changes to the PDS installation. Additionally, if the development environment (e.g., compiler, compiler settings, version of code generator, linker version) or software changes, a change impact analysis is needed and appropriate software life cycle data will need to be modified. DO-178C sections 12.1.2 and 12.1.3 discuss changes to installation, application, and development environment.

  11. Perform a change impact analysis. As noted in Chapter 10, modified software requires a change impact analysis to analyze the impact of the change and plan for re-verification.* As discussed in Chapter 10, the change impact analysis is also used to determine if the software change is major or minor. If the PDS is classified as legacy software (software developed to an older version of DO-178 than is required by the certification basis), this may impact how the software change is implemented. Per Order 8110.49, for minor changes, the process originally used for the legacy software may be used to implement the change (e.g., a minor change to a DO-178A compliant PDS may use the DO-178A process). However, if the change is classified as a major change, the change to the PDS and all subsequent changes to that PDS must be to the version of DO-178 required by the certification basis (probably DO-178C for new projects).*

  12. Follow applicable certification policy and guidance. In addition to the DO-178C section 12.1 guidance, most certification authorities have policy or guidance related to PDS and/or reuse (e.g., FAA Order 8110.49 and AC 20-148). Also, project-specific issue papers (or equivalent) may be raised when the existing guidance doesn’t fit the project-specific scenario.

  13. Document approach in plans and obtain agreement with certification authority. The plan to use PDS must be documented in the PSAC (or possibly a system-level certification plan). The PSAC describes the PDS functionality; its original pedigree; any changes to installation, software, or development environment; and results of change impact analysis and/or usage domain analysis. If the PDS was not developed to DO-178[ ], details of the gap analysis and alternative approach(es) should be explained. The PSAC should also explain any necessary changes and how they will be verified. Additionally, the PSAC explains how software configuration management and software quality assurance considerations for the software will be addressed per DO-178C sections 12.1.5 and 12.1.6. Once the PSAC is completed, it is submitted to the certification authority. (Most certification authorities will also want to see the change impact analysis. As noted in Chapter 10, the change impact analysis is often included in the PSAC.) It is important to obtain certification authority agreement before proceeding too far into the reuse effort.

  14. Follow the plans and implement per agreed process. Once the certification authority approves the plans (this could take a couple of iterations), the approved plans must be followed. All necessary rework, verification, and documentation should be performed. The change impact analysis and plans may require updates as the project evolves.

24.3.2 Reusing PDS That Was Not Developed Using DO-178[ ]

As noted in Figure 24.1 Blocks 5 and 6, if the software was not developed to DO-178[ ], a gap analysis is performed to identify DO-178C objectives that are not satisfied. Depending on the original development, filling these gaps might be a trivial task or it could be extremely time consuming. For SOUP, it may prove more efficient to start over than to try to reconstruct nonexistent data. For PDS that has some data available, filling the gaps may be a viable path. PDS with an excellent track record outside civil aviation (e.g., a COTS RTOS, a military application, or an automotive component) may be a successful candidate for this approach.

The gap analysis examines the available data against what DO-178C requires in order to determine how much additional work is needed. In my experience, for levels A and B software, the gap tends to be rather wide. If source code is not available, level D is generally the highest one can go.* If very little data is available but the source code is, service history and reverse engineering tend to be the most common options.

DO-278A section 12.4 and DO-248C section 4.5 identify typical alternative approaches used to ensure that PDS provides the same level of confidence as if the DO-178C objectives had been satisfied initially. Table 24.3 summarizes some of the more common approaches. In most scenarios, the PDS gaps are filled by a combination of these approaches. Service history is discussed later in this chapter and reverse engineering is examined in Chapter 25.

24.3.3 Additional Thoughts on COTS Software

As noted earlier, COTS software is a special kind of PDS. Examples of COTS software are included in Table 24.1. DO-278A section 12.4 provides some specific guidance for acquiring and integrating COTS into a safety-critical system. DO-178C does not provide equivalent guidance, but the DO-278A approach can be applied to a DO-178C project. DO-278A promotes the concept of a COTS software integrity assurance case that is developed to ensure that the COTS software provides the same level of confidence as software developed to DO-278A or DO-178C. The integrity assurance case for DO-178C compliance includes the following information [14]:

  • Claims about the integrity of the COTS software and which DO-178C objectives are not met by the case.

  • Environment where the COTS software will be used.

  • Requirements that the COTS software satisfies.

  • Identification, assessment, and mitigation of unneeded capabilities in the COTS software.

  • Explanation of DO-178C objectives that are satisfied with the existing COTS software data.

  • Identification of DO-178C objectives not satisfied and explanation of how an equivalent level of confidence will be achieved using alternative approaches.

  • Explanation of strategies to be taken.

    Table 24.3 Summary of Alternative Approaches for PDS

  • Identification of all data to support the case, including additional software life cycle data that will be generated using alternative methods.

  • List of assumptions and justifications used in the integrity assurance argument.

  • Description of processes for verifying that all uncovered objectives identified in the gap analysis are satisfied.

  • Evidence that the COTS software has the same integrity as would be the case had all objectives been met during the original development.

Additionally, the following COTS-specific concerns need to be addressed when determining the feasibility of COTS software in a safety-critical system:

  1. The availability of the supporting life cycle data.

  2. The suitability of the COTS software for safety-critical use. Many COTS software components were not designed with safety in mind.

  3. The stability of the COTS software. Evaluate the following factors:

    1. How often has it been updated?

    2. Have patches been added?

    3. Is the complete set of life cycle data for each update provided or available?

    4. Will the supplier provide notification when updates are available, explain why they occurred, provide problem reports that were fixed (to support change impact analysis), and provide updated data to support the change?

  4. The technical support available from the supplier. Consider the following:

    1. If issues are discovered while using the COTS software, is the supplier willing (and obligated) to fix them?

    2. Is the supplier willing to support certification?

    3. Is the supplier willing (and obligated) to disclose known issues throughout the entire life of the COTS software usage (perhaps issues identified by other users)?

    4. What will happen if the supplier goes out of business or decides to no longer support the software?

  5. The configuration control of the COTS software. Consider the following:

    1. Is there a problem reporting system in place?

    2. Are changes traced to the previous baseline(s)?

    3. Are the COTS software and its supporting life cycle data uniquely identified?

    4. Is there a controlled release process?

    5. Are open problem reports provided?

  6. Protection of the COTS software from viruses, security vulnerabilities, etc.

  7. Tools and hardware support for the COTS software.

  8. Compatibility of the COTS software with the target computer and interfacing systems.

  9. Modifiability or configurability of the COTS software.

  10. Ability to deactivate, disable, or remove unwanted or unneeded functionality in the COTS software.

  11. Adequacy of the supplier’s quality assurance.

24.4 Product Service History

This section briefly explains product service history and factors to consider when proposing it as an alternative method for PDS.

24.4.1 Definition of Product Service History

As noted in Table 24.3, DO-178C defines product service history as follows: “A continuous period of time during which the software is operated within a known environment, and during which successive failures are recorded” [3].

This definition includes the concepts of problem reporting, environment (including operational and target computer environments), and time.

Significant effort has been expended by both the industry and the certification authorities to identify a feasible way to use product service history for certification credit. The Certification Authorities Software Team (CAST)* published a paper on the subject (CAST-1); the FAA sponsored research in this area (resulting in a research report and a handbook); the one page of text in DO-178B grew to four pages in DO-178C; and DO-248C includes a seven-page discussion paper on the topic. However, despite the effort to clarify the subject, it is still difficult to implement.

DO-178C section 12.3.4 explains that a product service history case depends on the following factors [3]: (1) configuration management of the software, (2) effectiveness of problem reporting activity, (3) stability and maturity of the software, (4) relevance of product service history environment, (5) length of the product service history, (6) actual error rates in the product service history, and (7) impact of modifications. The section goes on to provide additional guidance on the relevance of the service history, the sufficiency of the accumulated service history, the collection and analysis of problems found during service history, and the information to include in the PSAC when posing service history as an alternative method.

Product service history may be applied to software that was not developed to DO-178[ ] or to software that was developed to a lower software level than the new system requires.

24.4.2 Difficulties in Seeking Credit Using Product Service History

To date, it has been virtually impossible to make a successful claim using product service history alone. However, it has been successfully used to supplement other alternatives (e.g., reverse engineering or process recognition).

As noted in the FAA’s research report, entitled Software Service History Report, authored by Uma and Tom Ferrell, the definition of product service history in DO-178C is “very similar to the IEEE [Institute of Electrical and Electronics Engineers] definition of reliability, which is ‘the ability of a product to perform a required function under stated conditions for a stated period of time’” [15]. Neither DO-178C nor the certification authorities encourage software reliability models because historically they have not been proven accurate. Because of the similarity between reliability and service history, it is also difficult to make a satisfactory claim using product service history.

Another factor that tends to make product service history difficult to prove is that the data collected during the product’s history is often inadequate. Companies normally do not plan up front to make a product service history claim, so the problem reporting mechanism may not be in place to collect the data.

24.4.3 Factors to Consider When Claiming Credit Using Product Service History

When claiming credit for service history, the following need to be addressed:

  1. The service history relevance must be demonstrated. Per DO-178C section 12.3.4.1, to demonstrate the relevance of the product service history the following should occur [3]:

    1. The amount of time that the PDS has been active in service must be documented and adequate.

    2. The configuration of the PDS and the environment must be known, relevant, and under control.

    3. If the PDS was changed during the service history, the relevance of the service history for the updated PDS needs to be evaluated and shown to be relevant.

    4. The intended usage of the PDS must be analyzed to show the relevance of the product service history (to demonstrate that the software will be used in the same way).

    5. Any differences between the service history environment and the environment in which the PDS will be installed must be evaluated to ensure the history applies.

    6. An analysis must be performed to ensure that service history credit is not claimed for any software that was not exercised in service (e.g., deactivated code).

  2. The service history must be sufficient. In addition to showing the relevance of the service history, the amount of service history must be shown to be sufficient to satisfy the system safety objectives, including the software level. The service history must also satisfactorily address the DO-178C gaps that it is intended to fill.

  3. In-service problems must be collected and analyzed. In order to make a claim of service history, it must be demonstrated that problems that occurred during the PDS service history are known, documented, and acceptable from a safety perspective. This requires evidence of an adequate problem reporting process. As noted earlier, this can be a difficult task to carry out.

The FAA’s Software Service History Handbook identifies four categories of questions to ask when proposing a product service history claim [16].

These are excellent questions. If one can satisfactorily answer these, it might be feasible to claim service history. If not, it will be difficult to make a successful case to the certification authorities. The categories of questions are noted here:

  • 45 questions related to problem reporting

  • 11 questions about the operation (comparing the operation of the PDS in the previous domain with its operation in the target domain)

  • 12 questions about the environment (assessing the computing environment to assure that the environment in which the PDS was hosted during the service history is similar to the proposed environment)

  • 19 questions about time (evaluating the service history time duration and error rates using the data available from product service history)

For convenience, the specific questions are included in Appendix D. For additional information on product service history, consult DO-178C (section 12.3.4) [3], DO-248C (discussion paper #5) [17], and the FAA’s Software Service History Handbook [16].

References

1. R. Reihl, Can software be safe?—An Ada viewpoint, Embedded Systems Programming, December 1997.

2. N. Leveson, Safeware: System Safety and Computers (Reading, MA: Addison-Wesley, 1995).

3. RTCA DO-178C, Software Considerations in Airborne Systems and Equipment Certification (Washington, DC: RTCA, Inc., December 2011).

4. A. Lattanze, A component-based construction framework for DoD software systems development, CrossTalk, November 1997.

5. S. McConnell, Rapid Development (Redmond, WA: Microsoft Press, 1996).

6. B. Meyer, Rules for component builders, Software Development 7(5), 26–30, May 1999.

7. J. Sodhi and P. Sodhi, Software Reuse: Domain Analysis and Design Process (New York: McGraw-Hill, 1999).

8. RTCA DO-297, Integrated Modular Avionics (IMA) Development Guidance and Certification Considerations (Washington, DC: RTCA, Inc., November 2005).

9. A. Rhodes, Component based development for embedded systems, Embedded Systems Conference (San Jose, CA, Spring 1999), Paper #313.

10. J. D. Mooney, Portability and reuse: Common issues and differences, Report TR 94-2 (Morgantown, WV: West Virginia University, June 1994).

11. J. D. Mooney, Issues in the specification and measurement of software portability, Report TR 93-6 (Morgantown, WV: West Virginia University, May 1993).

12. Federal Aviation Administration, Reusable Software Components, Advisory Circular 20-148 (Washington, DC: Federal Aviation Administration, December 2004).

13. D. Allemang, Design rationale and reuse, IEEE Software Reuse Conference (Orlando, FL, 1996).

14. RTCA DO-278A, Guidelines for Communications, Navigation, Surveillance, and Air Traffic Management (CNS/ATM) Systems Software Integrity Assurance (Washington, DC: RTCA, Inc., December 2011).

15. U. D. Ferrell and T. K. Ferrell, Software Service History Report, DOT/FAA/AR-01/125 (Washington, DC: Office of Aviation Research, January 2002).

16. U. D. Ferrell and T. K. Ferrell, Software Service History Handbook, DOT/FAA/AR-01/116 (Washington, DC: Office of Aviation Research, January 2002).

17. RTCA DO-248C, Supporting Information for DO-178C and DO-278A (Washington, DC: RTCA, Inc., December 2011).

18. Federal Aviation Administration, Software Approval Guidelines, Order 8110.49 (Washington, DC: Federal Aviation Administration, Change 1, September 2011).

*DO-178[ ], indicates DO-178, DO-178A, DO-178B, or DO-178C.

*DO-178[ ] could be DO-178, DO-178A, DO-178B, or DO-178C.

*DO-178[ ] may be DO-178, DO-178A, DO-178B, or DO-178C.

*DO-178C sections 12.1.1.c and 12.1.1.d also require a change impact analysis.

*At this time, Order 8110.49 (change 1) is in effect [18]. The approach for legacy software could change when Order 8110.49 is updated to recognize DO-178C.

DO-178C section 12.1.5 ensures that a change management process is implemented, including traceability to previous baseline and a change control system with problem reporting, problem resolution, and change tracking.

DO-178C section 12.1.6 ensures that software quality assurance has evaluated the PDS and any changes to the PDS.

*FAA Order 8110.49 chapter 8, provides some insight into how to comply with level D objectives [18].

*CAST is a team of international certification authorities who strive to harmonize their positions on airborne software and aircraft electronic hardware in CAST papers.

Brackets added for clarity.
