6 Software Requirements

Acronym

CASE computer-aided software engineering
CAST Certification Authorities Software Team
FAA Federal Aviation Administration
FAQ frequently asked question
HLR high-level requirement
IEEE Institute of Electrical and Electronics Engineers
LLR low-level requirement
SWRD Software Requirements Document
TBD to be determined

6.1 Introduction

The software requirements are foundational to DO-178C compliance and safety-critical software development. The success or failure of a project depends on the quality of the requirements. As Nancy Leveson writes:

The vast majority of accidents in which software was involved can be traced to requirements flaws and, more specifically, to incompleteness in the specified and implemented software behavior—that is, incomplete or wrong assumptions about the operation of the controlled system or required operation of the computer and unhandled controlled-system states and environmental conditions. Although coding errors often get the most attention, they have more of an effect on reliability and other qualities than on safety [1].

As the requirements go, so the project goes. The most chaotic projects I’ve experienced or witnessed started with bad requirements and deteriorated from there. Likewise, the best projects I’ve seen are ones that spent the effort to get the requirements right. Several fundamentals of effective requirements were elaborated in Chapter 2, when discussing system requirements. Therefore, I encourage you to read or review Section 2.2, if you haven’t done so recently. This chapter builds on the Section 2.2 concepts, with emphasis on software requirements rather than system requirements.

Many of the items discussed in this chapter also apply to the system requirements and can augment the material presented in Chapter 2. The line between system requirements and software requirements is often very fuzzy. In general, the software requirements refine the validated system requirements and are used by the software developers to design and implement the software. Also, software requirements identify what the software does, rather than what the system does. When writing the software requirements, errors, deficiencies, and omissions in the system requirements may be identified and should be documented in problem reports and resolved by the systems team.

This chapter examines the importance of good requirements and how to write, verify, and manage requirements. Additionally, the chapter ends with a discussion on prototyping and traceability—two topics closely related to requirements development.

6.2 Defining Requirement

The Institute of Electrical and Electronics Engineers (IEEE) defines a requirement as follows [2]:

  1. A condition or capability needed by a user to solve a problem or achieve an objective.

  2. A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.

  3. A documented representation of a condition or capability as in 1 or 2.

The DO-178C glossary defines software requirement, high-level requirements, low-level requirements, and derived requirements as follows [3]:

  • Software requirement—“A description of what is to be produced by the software given the inputs and constraints. Software requirements include both high-level requirements and low-level requirements.”

  • High-level requirements—“Software requirements developed from analysis of system requirements, safety-related requirements, and system architecture.”

  • Low-level requirements—“Software requirements developed from high-level requirements, derived requirements, and design constraints from which Source Code can be directly implemented without further information.”

  • Derived requirements—“Requirements produced by the software development processes which (a) are not directly traceable to higher level requirements, and/or (b) specify behavior beyond that specified by the system requirements or the higher level software requirements.”

Unlike the IEEE definition, DO-178C defines two levels of software requirements: high-level requirements (HLRs) and low-level requirements (LLRs). This chapter concentrates on HLRs, which will simply be referred to as requirements throughout this chapter. DO-178C includes LLRs in the design; therefore, they will be discussed in the next chapter.

In general, requirements are intended to “describe what we’re going to have when we’re done with a project” [4]. Software requirements normally address the following: functionality, external interfaces, performance, quality attributes (e.g., portability or maintainability), design constraints, safety, and security.

Good requirements do not address design or implementation details, project management details (such as cost, schedule, development methodology), or test details.

6.3 Importance of Good Software Requirements

Let’s consider five reasons why requirements are so important to safety-critical software development.

6.3.1 Reason 1: Requirements Are the Foundation for the Software Development

I’m not an architect, but common sense tells me that when building a house, the foundation is extremely important. If the foundation is made of weak or faulty material, is missing sections, or is not level, the house built upon it will have long-term problems. The same is true in software development. If the requirements (the foundation) are weak, it has long-term effects and leads to indescribable problems and difficulties for everyone involved, including the customer.

Reflecting on the projects I’ve survived over the years, there are some common characteristics that led to the bad requirements. First, the software requirements were developed by inexperienced teams. Second, the teams were pushed to ship something to the customer before they were ready. Third, the system requirements were not validated prior to delivering the software.

These characteristics led to the following common results:

  • The customers were not pleased.

  • The products suffered from an extremely high number of problem reports.

  • The software required at least one complete redesign (in a couple of situations it took two additional iterations).

  • The projects were considerably over time and over budget.

  • Several leaders were reassigned (right off to other doomed projects) and careers were damaged.

The results of bad requirements lead to what I call the snowball effect. The problems and complexity accumulate until the project becomes a huge, unmanageable snowball. Figure 6.1 illustrates the problem. When development artifacts are not reviewed and matured before going to the next phase of development, it becomes more difficult and expensive to identify and remove the error(s) later. In some instances it may even become impossible to identify the error, since the root cause is buried so deeply in the data


Figure 6.1 Effects of bad requirements and inadequate review process.

(the snowball). All development steps are iterative and subject to change as the project progresses; however, building successive development activities on incomplete and erroneous inputs is one of the most common errors and inefficiencies in software engineering.

Requirements will never be perfect the first time, but the goal is to get at least a portion of the requirements as complete and accurate as possible to proceed with design and implementation at an acceptable level of risk. As time progresses, the requirements are updated to add or modify functionality and to make changes based on design and implementation maturity. This iterative approach is the most common way to obtain quality requirements and meet customer needs at the same time. In Software Requirements Karl Wiegers writes,

Iteration is a key to requirements development success. Plan for multiple cycles of exploring requirements, refining high-level requirements into details, and confirming correctness with users. This takes time and it can be frustrating, but it’s an intrinsic aspect of dealing with the fuzzy uncertainty of defining a new software product [4].

6.3.2 Reason 2: Good Requirements Save Time and Money

Effective requirements engineering is probably the highest return on investment any project can realize. Multiple studies show that the most expensive errors are those that started in the requirements phase and that the biggest reason for software rework is bad requirements. One study even showed that “requirements errors account for 70 to 85 percent of the rework cost” [5].

A study by the Standish Group gave the following reasons for failures in software projects (several of the reasons are requirements related) [6]:

  • Incomplete requirements—13.1%

  • Lack of user involvement—12.4%

  • Insufficient resources/schedule—10.6%

  • Unrealistic expectations—9.9%

  • Lack of managerial support—9.3%

  • Changing requirements—8.7%

  • Poor planning—8.1%

  • Software no longer needed—7.4%

Although this study is somewhat dated, the results are consistent with what I see in aviation projects year after year. Good requirements are necessary to obtain successful results. I never cease to be amazed at how many projects do not have time to do a project right the first time, but end up having time and money to do the work two or three times.

6.3.3 Reason 3: Good Requirements Are Essential to Safety

The Federal Aviation Administration (FAA) research report entitled Requirements Engineering Management Findings Report states: “Investigators focusing on safety-critical systems have found that requirements errors are more likely to affect the safety of an embedded system than errors introduced during design or implementation” [7]. Without good requirements, it’s impossible to satisfy the regulations. The FAA and other regulatory authorities across the world have legally binding regulations requiring that every system on an aircraft show that it meets its intended function under any foreseeable operating condition. This means that the intended functions must be identified and proven. Requirements are the formal method of communicating the safety considerations and the intended functionality.

6.3.4 Reason 4: Good Requirements Are Necessary to Meet the Customer Needs

Without accurate and complete requirements, the customer’s expectations will not be met. Requirements are the means of communication between the customer and developer. Poor requirements indicate poor communication, which normally leads to a poor product.

One level A project that I reviewed years ago had terrible software requirements and no hope of finishing the level A activities in time to support the aircraft schedule. Because of the software shortcomings, the customer had to redesign part of the aircraft to disable the system in safety-critical operations, add hardware to compensate for what the software was supposed to do, and reduce the reliance on the software to level D. After the initial certification, a new supplier was selected and the original supplier was fired. The main reasons for this fiasco were as follows: (1) the software requirements did not comply with the customer’s system requirements and (2) when the system requirements were unclear, the software team just improvised rather than asking the customer what they needed. Granted, the system requirements had their issues, but the whole debacle could have been avoided with better communication and requirements development at both companies.

6.3.5 Reason 5: Good Requirements Are Important for Testing

Requirements drive the testing effort. If the requirements are poorly written or incomplete, the following are possible:

  • The resulting requirements-based tests may test the wrong thing and/or incompletely test the right thing.

  • Extensive effort may be required during testing to develop and ferret out the real requirements.

  • It will be difficult to prove intended functionality.

  • It will be challenging or impossible to prove that there is no unintended functionality (i.e., to show that the software does what it is supposed to do and only what it is supposed to do).

Safety hinges upon the ability to show that intended functions are satisfied and that no unintended functionality will impact safety.

6.4 The Software Requirements Engineer

Requirements development is typically performed by one or more requirements engineers (also known as requirements analysts).* Most successful projects have at least two senior requirements engineers who work closely together throughout the project. They work together in developing the requirements, and constantly review each other’s work as it progresses. They also perform ongoing sanity checks to ensure the requirements are aligned and viable. It’s always good to have some junior engineers working with the senior engineers, since the knowledge will be needed for future projects, and organizations need to develop the requirements experts of the future. Organizations should take care when selecting the engineers trusted with requirements development; not everyone is capable of developing and documenting good requirements. The skills needed for an effective requirements engineer are discussed in the following:

Skill 1: Requirements authoring experience. There is no substitute for experience. Someone who has been through multiple projects knows what works and what does not. Of course, it’s best if their experience is based on successful projects, but not-so-successful projects can also build experience if the individual is willing to learn from the mistakes.

Skill 2: Teamwork. Since requirements engineers interact with just about everyone on the team, it is important that they be team players who get along well with others. The software requirements engineer will work closely with the systems engineers (and possibly the customers), the designers and coders, the project manager, quality assurance, and testers. Lone Rangers and egomaniacs are difficult to work with and often do not have the best interest of the team in mind.

Skill 3: Listening and observation skills. Requirements engineering often involves discerning subtle clues from the systems engineers or customer. It requires the ability to detect and act on missing elements—it’s not just a matter of understanding what is there but also determining what is not there and should be there.

Skill 4: Attention to both big picture and details. A requirements engineer must not only be capable of seeing how the software, hardware, and system fit together but also able to visualize and document the details. The requirements engineer needs to be capable of thinking both top-down and bottom-up.

Skill 5: Written communication skills. Clearly, one of the major roles of the requirements engineer is to document the requirements. He or she must be able to write in an organized and clear style. Successful requirements engineers are those who can clearly communicate complex ideas and issues. Additionally, the requirements engineer should be proficient in using graphical techniques to communicate ideas that are difficult to explain in text. Examples of graphical techniques used are tables, flowcharts, data flow diagrams, control flow diagrams, use cases, state diagrams, state charts, and sequence and timing diagrams.

Skill 6: Commitment. The lead requirements engineer(s) should be someone who is committed to seeing the project through to the end. I’ve seen several projects suffer because the lead engineer took another job and left with much of the vital knowledge in his head. This is yet another reason why I recommend a team approach to requirements development.

Skill 7: Domain experience. It is advisable to have a requirements engineer who is knowledgeable in the domain. Someone experienced in navigation systems may not know the subtleties required to specify a brake or fuel system. While somewhat subjective, I find that it’s best if at least half of the development team has domain experience.

Skill 8: Creative. Brute-forced requirements are typically not the most effective. Good requirements are a mix of art and science. It takes creative thinking (the ability to think outside the box) to develop optimal requirements that capture the intended functions and prevent the unintended functions.

Skill 9: Organized. Unless the requirements engineer is organized, his or her output may not effectively communicate what is intended. Also, since the requirements engineers are crucial to the project, they may be frequently distracted by less important tasks; therefore, they must be able to organize, prioritize, and stay focused.

6.5 Overview of Software Requirements Development

Chapter 5 examined the planning process that precedes the requirements development. There are several requirements aspects to consider during planning, including the requirements methodology and format, use of requirements management tools, development team identification, requirements review process, trace strategy, requirements standards definition, etc. Planning is essential to effective requirements development.

The requirements development effort can be organized into seven activities: (1) gather and analyze input; (2) write requirements; (3) review requirements; (4) baseline, release, and archive requirements; (5) implement requirements; (6) test requirements; and (7) change requirements using the change management process. Figure 6.2 illustrates these activities. Activities 1–3 are discussed in this chapter; activities 4 and 7 are covered in Chapter 10; activity 5 is discussed in Chapters 7 and 8; and activity 6 is addressed in Chapter 9.


Figure 6.2 Requirements development activities.

Each project goes about the system and software requirements development in a slightly different manner. Table 6.1 summarizes the most common approaches for developing systems and software requirements, along with advantages and disadvantages for each approach. As more systems teams and certification authorities embrace ARP4754A, the state of system requirements should improve. High-quality requirements depend on effective communication and partnership between the stakeholders, including the customer, the systems team (including safety personnel), the hardware team, and the software team.

Table 6.1 System and Software Requirements Development Approaches

Approach 1—Supplier-driven product: System and software requirements are developed by the same company.

Advantages (Pros):

  • Encourages more open communication.

  • Typically means both systems and software have the same (or very similar) review and release process.

  • Domain expertise within supplier.

  • Potential for high reuse and reapplication.

Disadvantages (Cons):

  • Customer and systems teams may not specify things in as much detail.

  • Teams may not be colocated, even though in the same company (e.g., different buildings or different geographical locations).

  • Oftentimes, the software team is more disciplined in documenting requirements than the systems team (because of the DO-178C requirements).

Approach 2—Customer-driven product: System requirements are developed by the customer (e.g., aircraft company) and sent to supplier for software implementation.

Advantages (Pros):

  • Oftentimes, customer requirements are very detailed when they intend to outsource.

  • Allows customer to select a supplier with appropriate domain expertise, which may not exist in-house.

Disadvantages (Cons):

  • There may be a throw-it-over-the-wall mentality.

  • System requirements may be written at the wrong level, overly prescriptive, and limit the software design options.

  • Customer may be slow or resistant to updating system requirements when deficiencies are found.

  • Supplier may not have the ability to test all requirements.

Approach 3—Combined requirements: System and software requirements are combined into a single level (by the same company).

Advantages (Pros):

  • Ensures consistency between system and software requirements.

  • Potential for less duplication of test effort.

  • If a simple product, may eliminate artificial requirements layers.

Disadvantages (Cons):

  • Makes it challenging to have the appropriate level of granularity and detail in the requirements.

  • May force the system requirements to be too detailed or may leave the software requirements at an inappropriately high level.

  • May cause problems with allocation to hardware and software.

  • For anything other than a simple product, it is difficult to show the certification authority that all objectives are covered.

6.6 Gathering and Analyzing Input to the Software Requirements

DO-178C assumes that the system requirements given to the software team are fully documented and validated. But in my experience, that is rarely the case. Eventually, the system requirements must be complete, accurate, correct, and consistent; and the sooner that happens, the better. However, it often falls on the software team to identify requirements deficiencies or ambiguities and to work with the systems team to establish a fully validated set of system requirements.

There is often considerable pressure for the requirements engineer to create the requirements specification without first spending the time analyzing the problem. This results in brute-forced requirements, which are bulky, disorganized, and unclear. Just as an artist takes time to plan a masterpiece, the requirements engineer needs time to gather, analyze, and organize the requirements effort. It’s a matter of getting the problem clearly visualized. Once this is done, the actual writing of the specification occurs relatively quickly and requires less rework.

The software requirements engineers typically perform the following gathering and analysis activities in order to develop the software requirements.

6.6.1 Requirements Gathering Activities

Before ever writing a single requirement, the requirements engineers gather data and knowledge in order to comprehend the product they are specifying. Gathering activities include the following:

  1. Review and strive to thoroughly understand the system and safety requirements, in whatever state they exist. The software engineers should become intimately familiar with the system requirements. An understanding of the preliminary safety assessment is also necessary to comprehend the safety drivers.

  2. Meet with the customers, systems engineers, and domain experts to answer any questions about the system requirements and to fill in missing information.

  3. Determine the maturity and completeness of the systems and safety requirements before developing software requirements.

  4. Work with systems engineers to make modifications to the system requirements. Before the software team can refine the system requirements into software requirements, the system requirements need to be relatively mature and stable. Some systems teams are quite responsive and work closely with the software team to update the system requirements. However, in many circumstances, the software team must proactively push the systems engineers for the needed changes.

  5. Consider relevant past projects, as well as the problem reports for those projects. Oftentimes, the customer, systems engineers, or the software developers will have some past experience in the domain area. The requirements may not be useable in their previous format, but they can certainly help provide an understanding of what was implemented, what worked, and possibly what did not work.

  6. Become fully educated on the requirements standards and certification expectations.

6.6.2 Requirements Analyzing Activities

The analysis process prepares the engineer to write the requirements. Sometimes it is tempting to rush directly into writing the requirements specification. However, skipping the analysis and failing to consider the problem from multiple views may result in significant rework later. There will always be some iterations and fine-tuning, but the analysis process can help minimize that. Consider the following during requirements analysis:

  1. Organize the input gained during the gathering process to be as clear and complete as possible. The requirements engineer must analyze the problem to be solved from multiple views. Oftentimes, use cases* are utilized to consider the problem from the user’s perspective.

  2. Lay out the framework for the requirements specification task. By identifying what will likely become the table of contents, the requirements can be organized into a logical flow. This framework also serves as a measure of completeness. One common strategy for determining the requirements framework is to list the safety and systems functions to be provided, and then determine what software is needed for each function to operate as intended and to protect against unintended effects.

  3. Develop models of the software behavior in order to ensure an understanding of the customer needs. Some of these models will be refined and integrated into the software requirements and some will just be a vehicle that assists with the requirements development. Some of the models may also be added to the system requirements when deficiencies are noted.

  4. In some cases, a prototype may be used to help with the development of the requirements. The requirements analyst may either help develop the prototype or use the prototype to document requirements. A quick prototype can be very beneficial for demonstrating functionality and maturing the requirements details. The risk of a prototype is that it can look impressive on the surface despite poor underlying design and code; as a result, project managers or customers may insist on using the code from the working prototype. When faced with cost and schedule pressures, I’ve seen several projects abandon their original (and approved) plans and try to use the prototype code for flight test and certification. I have yet to see it work out well; the cost, schedule, and customer relationship all suffer. Section 6.10 provides additional thoughts about prototyping.

6.7 Writing the Software Requirements

The gathering and analysis activities are ongoing. Once sufficient knowledge is gained and the problem sufficiently analyzed, the actual requirements writing commences. Requirements writing involves many parallel and iterative activities; these are presented as tasks rather than steps because they are not serial. Six tasks are explained on the next several pages.

6.7.1 Task 1: Determine the Methodology

There are multiple approaches to documenting requirements—all the way from text-only to all-graphical to a combination of text and graphics. Because of the need for traceability and verifiability, many safety-critical software requirements are primarily textual with graphics used to further illustrate the text. However, graphics do play a significant role in the requirements development effort. “Pictures help bridge language and vocabulary barriers …” [4]. At this phase, the graphics focus on the requirements (what the software will do) and not design (how the software will do it). Many of the graphics may be further elaborated in the design phase, but at the requirements phase they should strive to be implementation-free. Developers may opt to document some design concepts as they write the requirements, but those should be notes for the designer and not part of the software requirements specification.

A brief word of caution: be careful not to rely solely on pictures, which are difficult to test. When using graphics to describe the requirements, the testability of the requirements should be constantly considered.

Some examples of graphical techniques that enhance the textual descriptions include the following [9]:

  • Context or use case diagrams—illustrate interfaces with external entities. The details of the interfaces are typically identified in the interface control specification.

  • A high-level data dictionary—defines data that will flow between processes. The data dictionary will be further refined during the design phase.

  • Entity-relationship or class diagrams—show logical relationship between entities.

  • State-transition diagrams—show the transition between states within the software. Each state is usually described in textual format.

  • Sequence diagrams—show the sequence of events during execution, as well as some timing information.

  • Logic diagrams and/or decision tables—identify logic decisions of functional elements.

  • Flowcharts or activity diagrams—identify step-by-step flow and decisions.

  • Graphical user interfaces—clarify relationships that may be difficult to describe in text.
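Several of these graphical techniques can also be captured in a machine-checkable form. As a loose illustration (not any project's actual method), the following Python sketch encodes a hypothetical state-transition table and flags state/event pairs that have no specified transition; each flagged pair is a question for the requirements review: is holding the current state really the intended behavior? All states, events, and transitions here are invented.

```python
# Hypothetical state-transition table for a simple mode-control function.
# States, events, and transitions are invented for illustration only.
TRANSITIONS = {
    ("STANDBY", "power_on"): "INIT",
    ("INIT", "self_test_pass"): "OPERATE",
    ("INIT", "self_test_fail"): "FAULT",
    ("OPERATE", "fault_detected"): "FAULT",
    ("FAULT", "reset"): "STANDBY",
}

def next_state(state, event):
    """Return the next state; unspecified (state, event) pairs hold the current state."""
    return TRANSITIONS.get((state, event), state)

def unhandled_pairs(states, events):
    """List (state, event) pairs with no specified transition -- candidates
    for 'unintended function' scrutiny during requirements analysis."""
    return [(s, e) for s in states for e in events
            if (s, e) not in TRANSITIONS]
```

A table like this complements, rather than replaces, the textual requirement for each state; the text still carries the rationale and the safety intent.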

The model-based development approach strives to increase the graphical representation of the requirements through the use of models. However, to date, even models require some textual description. Model-based development is examined in Chapter 14.

The technique selected should address the connection between textual and graphical representations. The textual requirements normally provide the context and reference to the graphical figures. This helps with traceability and completeness, and facilitates testing. Sometimes the graphics are provided as reference only to support the requirements. If this is the case, it should be clearly stated.

Another aspect of methodology to consider before diving into the requirements writing is whether to use computer-aided software engineering (CASE) tools and requirements templates, since these may impact the methodology.

Ideally, the application of the selected methodology is explained and illustrated in the requirements standards. The standards help guide the requirements writers, and ensure that everyone is following the same approach. If there are multiple developers involved in the requirements writing, an example of the methodology and layout should be documented, so that everyone is applying it in the same way. The more examples and details provided, the better.

6.7.2 Task 2: Determine the Software Requirements Document Layout

The end result of the software requirements documentation process is the Software Requirements Document (SWRD). The SWRD

states the functions and capabilities that a software system must provide and the constraints that it must respect. The SWRS [SWRD] is the basis for all subsequent project planning, design, and coding, as well as the foundation for system testing and user documentation. It should describe as completely as necessary the [software] system’s behaviors under various conditions. It should not contain design, construction, testing, or project management details other than known design and implementation constraints [4].*

The SWRD should comprehensively explain the software functionality and limitations; it should not leave room for assumptions. If some functionality or quality does not appear in the SWRD, no one should expect it to magically appear in the end product [4].

Early in the requirements development process, the general layout of the SWRD should be determined. It may be modified later, but it’s valuable to have a general outline or framework to start with, especially if there are multiple developers involved. The outline or template helps keep everyone focused on their area’s objectives and able to understand what will be covered elsewhere. In order to improve readability and usability of the SWRD, the following suggestions are provided:

  • Include a table of contents with subsections.

  • Provide an overview of the document layout, including a brief summary of each section and the relationship between sections.

  • Define key terms that will be used throughout the requirements and use them consistently. This may include such things as coordinate systems and external feature nomenclature. If the requirements will be implemented or verified by anyone unfamiliar with the language or domain, this is a critical element of the SWRD.

  • Provide a complete and accurate list of acronyms.

  • Explain the requirements grouping in the document (a graphic, such as a context diagram, might be useful for this).

  • Organize the document logically and use section/subsection labels and numbers (typically requirements are organized by features or key functionality).

  • Identify the environment in which the software will operate.

  • Use white space throughout to help readability.

  • Use bold, italics, and underlining for emphasis and use them consistently throughout. Be sure to explain the meaning of any conventions, text styles, etc.

  • Identify functional requirements, nonfunctional or nonbehavioral requirements, and external interfaces.

  • Include any constraints that will apply (e.g., tool constraints, language constraints, compatibility constraints, hardware limitations, or interface conventions).

  • Number and label figures and tables, and clearly reference them from the textual requirements.

  • Provide cross-references within the specification and to other data as needed.

6.7.3 Task 3: Divide Software Functionality into Subsystems and/or Features

It is important to decompose the software into manageable groups that make sense and can be easily integrated. In most cases, a context diagram or use case is used to provide the high-level view of the requirements organization.

For larger systems, the software may be divided into subsystems. For smaller systems and subsystems, the software is often further divided into features, with each feature containing specific functionality.

As noted earlier, one way to organize the functions is to work with the safety and systems engineers to define the safety and systems functions to be provided. This input is then used to determine what software is needed for each function to operate as intended, as well as to determine what protections are needed to prevent unintended effects.

There are other ways to organize the requirements. Regardless of the approach selected, reuse and minimized impact of change are normally important characteristics to consider when dividing the functionality.

6.7.4 Task 4: Determine Requirements Priorities

Because projects are often on a tight schedule and are developed using an iterative or spiral life cycle model, it may be necessary to prioritize which requirements must be defined and implemented first. Priorities should be coordinated with the systems team and the customers. After the enabling software (such as boot, execution, and inputs/outputs), software functions that are critical to system functionality or safety or that are highly complex should have the highest priority. Oftentimes, the priorities are based on the urgency defined by the customer and the importance to the overall system functionality. In order to properly prioritize, it is sometimes helpful to divide the subsystems, features, and/or functions into four groups: (1) urgent/important (high priority), (2) not urgent/important (medium priority), (3) urgent/not important (low priority), and (4) not urgent/not important (may not be necessary to implement). For large and complex projects, there are many factors to assess when determining the priority. In general, it is preferable to keep the prioritization process as simple and as free from politics as possible. The prioritization of subsystems, features, and/or functions should be identified in the project management plan. Keep in mind that priorities may need to be readjusted as the project progresses, depending on the customer feedback and project needs.
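The four-group urgency/importance scheme above can be sketched as a small lookup. This is only an illustration of the bucketing logic; the `Feature` type, field names, and priority labels are assumptions, not part of DO-178C or any standard tool.

```python
from dataclasses import dataclass

# Illustrative priority buckets from the urgency/importance grouping
# described in the text; the labels are assumptions, not DO-178C terms.
PRIORITY = {
    (True, True): "high",      # (1) urgent/important
    (False, True): "medium",   # (2) not urgent/important
    (True, False): "low",      # (3) urgent/not important
    (False, False): "defer",   # (4) not urgent/not important
}

@dataclass
class Feature:
    name: str
    urgent: bool      # urgency as defined by the customer
    important: bool   # importance to overall system functionality

def prioritize(features):
    """Group each feature into one of the four priority buckets."""
    return {f.name: PRIORITY[(f.urgent, f.important)] for f in features}
```

A project would still record the resulting priorities in the project management plan and revisit them as customer feedback arrives.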

6.7.5 A Brief Detour (Not a Task): Slippery Slopes to Avoid

Before discussing the requirements documenting task, let’s take a quick detour to discuss a few slippery slopes that many unsuccessful projects seem to attempt. These are mentioned here because projects sometimes start down one or more of these slippery slopes instead of focusing on the requirements. Once a project starts down a slope, it’s hard to exit it and refocus on the requirements.

6.7.5.1 Slippery Slope #1: Going to Design Too Quickly

One of the most challenging parts of authoring requirements is to avoid designing. By nature, engineers tend to be problem solvers and want to go straight to design. The schedule pressures also force engineers to go to design too quickly—before they’ve really solidified what it is they’re trying to build. However, going to design too quickly can lead to a compromised implementation, since the best implementation is often not apparent until the problem is fully defined. Requirements engineers may consider several potential solutions to highlight the trade-offs. Thinking through options frequently helps mature the requirements and identifies what is really needed in the software. However, the implementation details do not belong in the requirements themselves.

In order to avoid this slippery slope, I suggest that engineers do the following:

  1. Clearly define the division between requirements (what) and design (how) in the standards.

  2. Ensure that the requirements process allows for notes or comments. This annotation provides valuable information that will convey the intent without burdening the high-level requirements with design details.

  3. Make sure that the process allows for documenting design ideas. Having some potential solutions identified can jump-start the design process.

6.7.5.2 Slippery Slope #2: One Level of Requirements

DO-178C identifies the need for both high-level requirements and low-level requirements. The high-level requirements are documented in the SWRD and the low-level requirements are part of the design. Developers frequently attempt to identify one level of software requirements that they directly code from (i.e., they combine high-level and low-level requirements into a single level of requirements). Good software engineering practices and experience continually warn against this. Requirements and design are separate and distinct processes. Out of the numerous projects I’ve assessed, I’ve rarely seen the successful combination of high-level requirements and low-level requirements into one level. Therefore, I recommend that this approach be avoided, except in well-justified cases. I find it more beneficial to document implementation ideas in a draft design document while writing the requirements (i.e., work on the requirements and design in parallel rather than combining them). Some believe modeling will remove the need for two levels of requirements; however, as will be discussed in Chapter 14, this is not the case. I caution anyone who wants to have one level of requirements in order to get to implementation faster. It might look good at first, but tends to fail in the end. The testing and verification phase is one area where issues arise. For the more critical software (levels A and B), it is difficult to get structural coverage when the requirements are not sufficiently detailed (one level of requirements may not have enough details for full structural coverage). Also, for level D projects where the low-level testing is not required, combining the requirements can result in more testing (since merging high-level and low-level requirements tends to result in more detailed requirements than the traditional high-level requirements).

DO-248C’s FAQ #81, entitled “What aspects should be considered when there is only one level of requirements (or if high-level requirements and low-level requirements are merged)?” and the Certification Authorities Software Team (CAST)* paper CAST-15 entitled “Merging High-Level and Low-Level Requirements” warn against merging software requirements into a single level [10,11]. There are a few projects where only one level of software requirements is needed (e.g., a low-level functionality like parts of an operating system or a math library function), but they are in the minority.

6.7.5.3 Slippery Slope #3: Going Straight to Code

Over the last few years there has been an increased tendency for projects to start coding right away. They might have some concepts and a few requirements documented, but when pressured to get functional software in the field, they just implement it the best they can. Later, the developers are pressured to use the kludged or ad hoc code for production. The code usually doesn’t align with the requirements that are eventually developed, it has none of the design decisions documented, and it often does not consider off-nominal conditions. Furthermore, the prototyped code is normally brute-forced and is not the best and safest solution. I have seen months and even years added to the project schedule, as the project tries to reverse engineer the design decisions and missing requirements from the prototype code. Oftentimes, during the reverse engineering effort, the project discovers that the code wasn’t complete or robust, let alone safe.

Let’s now exit the slippery slopes and consider what is involved in documenting the high-level software requirements (hopefully these will help teams avoid the slopes).

6.7.6 Task 5: Document the Requirements

One of the challenges of writing requirements is determining the level of detail. Sometimes the system requirements are very detailed, so it forces a lower level of detail in the software requirements than preferred. At other times, the system requirements are far too vague and require additional work and decomposition by the software requirements authors. The level of detail is a judgment call; however, the requirements should sufficiently explain what the software will do but not get into the implementation details. When writing the software high-level requirements, it is important to remember that the design layer (which includes the software low-level requirements) is still to occur. The software high-level requirements should provide adequate detail for the design effort, but not get into the design.

6.7.6.1 Document Functional Requirements

The majority of the requirements in the SWRD are functional requirements (also known as behavioral requirements). Functional requirements

define precisely what inputs are expected by the software, what outputs will be generated by the software, and the details of relationships that exist between those inputs and outputs. In short, behavioral requirements describe all aspects of interfaces between the software and its environment (that is, hardware, humans, and other software) [12].

Basically, the functional requirements define what the software does. As previously discussed, they are generally organized by subsystem or feature and are documented using a combination of natural language text and graphics.

The following concepts should be considered when documenting the functional requirements:

  • Organize the requirements into logical groupings.

  • Aim for understandability by customers and users who are not software experts.

  • Document requirements that are clear to designers (use comments or notes to expand potentially challenging areas).

  • Document requirements with focus on external software behavior, not the internal behavior (save that for design).

  • Document requirements that can be tested.

  • Use an approach that can be modified (including requirements numbering and organization).

  • Identify the source of the requirements (see Sections 6.7.6.6 and 6.11 for more on traceability).

  • Use text and graphics consistently.

  • Identify each requirement (see Section 6.7.6.4 on unique identification).

  • Minimize redundancy. The probability of discrepancies increases each time the requirements are restated.

  • Follow the requirements standards and agreed upon techniques and template. If a standard, technique, or template does not meet the specific need, determine if an update or waiver to plans, standards, or procedures is needed.

  • If a requirements management tool is used, follow the agreed upon format and complete all appropriate fields proactively (Section 6.9 discusses this more).

  • Implement characteristics of good requirements (see Section 6.7.6.9).

  • Coordinate with teammates to get early feedback and ensure consistency across the team.

  • Identify safety requirements. Most companies find it beneficial to identify requirements that directly contribute to safety. These are requirements with a direct tie to the safety assessment and that support ARP4754A compliance.

  • Document derived requirements and include the reason for their existence (i.e., rationale or justification). (Derived requirements are discussed later.)

  • Include and identify robustness requirements. Robustness is “the degree to which a system continues to function properly when confronted with invalid inputs, defects in connected software or hardware components, or unexpected operating conditions” [4].

    For each requirement, consider whether there are any potential abnormal conditions (e.g., invalid inputs or invalid states) and ensure that there is a defined behavior for each of the conditions.

6.7.6.2 Document Nonfunctional Requirements

Nonfunctional (nonbehavioral) requirements are those that “define the overall qualities or attributes to be exhibited by the resulting software” [12]. Even though they do not describe functionality, it is important to document these requirements, since they are expected by the customer and they drive design decisions. These requirements are important because they explain how well the product will work. They include characteristics such as speed of operation, ease of use, failure rates and responses, and abnormal conditions handling [4]. Essentially, the nonfunctional requirements include constraints that the designers must understand.

When nonfunctional requirements vary by feature or function, they should be specified with the feature or function. When the nonfunctional requirements apply across all features or functions, then they are usually included in a separate section. Nonfunctional requirements should be identified as such, either by a separate section of the requirements or with some kind of attribute, since they will usually not trace down to code and will impact the test strategy. Nonfunctional requirements still need to be verified, but they are often verified by analysis or inspection rather than test, since they may not exhibit testable functionality.

Following are some of the common requirements types that are identified as nonfunctional requirements:

  1. Performance requirements are probably the most common class of nonfunctional requirements. They include information to help designers, for example, response times, computational accuracy, timing expectations, memory requirements, and throughput.

  2. Safety requirements that might not be part of functionality are documented as nonfunctional requirements. Examples include the following:

    1. Data protection—preventing loss of or corruption of data.

    2. Safety regulations—specifying specific regulatory guidance or rules that must be satisfied.

    3. Availability—defining the time the software will be available and fully operational.

    4. Reliability—identifying when some aspect of the software is used to support system reliability.

    5. Safety margins—defining margins or tolerances needed to support safety, for example, timing or memory margin requirements.

    6. Partitioning—ensuring that partitioning integrity is maintained.

    7. Degradation of service—explaining how software will degrade gracefully or act in the presence of a failure.

    8. Robustness—identifying how the software will respond in the presence of abnormal conditions.

    9. Integrity—protecting data from corruption or improper execution.

    10. Latency—protecting against latent failures.

  3. Security requirements may be needed to support safety, ensure system reliability, or protect proprietary information.

  4. Efficiency requirements are a measure of how well the system utilizes processor capacity, memory, or communication [4]. They are closely related to performance requirements but may identify other important characteristics of the software.

  5. Usability requirements define what characteristics are needed to make the software user-friendly; this includes human factors considerations.

  6. Maintainability requirements describe the need to easily modify or correct the software. This includes maintainability during the initial development, during integration, and after the software has gone into production.

  7. Portability requirements address the need to easily move the software to other environments or target computers.

  8. Reusability requirements define the need to use the software for other applications or systems.

  9. Testability requirements describe what capabilities need to be built into the software for test, including systems or software development testing, integration testing, customer testing, aircraft testing, and production testing.

  10. Interoperability requirements document how well the software can exchange data with other components. Specific interoperability standards may apply.

  11. Flexibility requirements describe the need to easily add new functionality to the software, both during the initial development and over the life of the product.

6.7.6.3 Document Interfaces

Interfaces include user interfaces (e.g., in display systems), hardware interfaces (as in communication protocol for a specific device), software interfaces (such as an application programmer interface or library interface), and communication interfaces (e.g., when using a databus or network).

Requirements for interfaces with hardware, software, and databases need to be documented. Oftentimes, the SWRD references an interface control document. In some cases, explicit SWRD requirements, independent standards, or a data dictionary describe the data and control interfaces. The interfaces should be documented in a manner to support the data and control coupling analysis, which is discussed in Chapter 9.

Any interface documents referenced from the requirements need to be under configuration control, since they affect the requirements, testing, system operations, and software maintenance.

6.7.6.4 Uniquely Identify Each Requirement

Each requirement should have a unique tag (also known as a number, label, or identifier). Most organizations use shall to identify requirements. Each shall identifies one requirement and has a tag. Using this approach helps the requirements engineer distinguish between what is actually required and what is commentary or support information.

Some tools automatically assign requirements tags, while others allow manual assignment of the tag. The identification approach should be documented in the standards and closely followed. Once a tag has been used, it should not be reassigned, even if the requirement is deleted. Additionally, it’s important to ensure that each tag only has one requirement. That is, don’t lump multiple requirements together, since this leads to ambiguity and makes it difficult to confirm test completeness.
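Two of the rules above (never reassign a retired tag; exactly one requirement, i.e., one shall, per tag) lend themselves to an automated check. The sketch below is illustrative; the function name, the tag/text mapping, and the retired-tag set are assumptions rather than features of any particular requirements management tool.

```python
import re

def check_tags(requirements, retired_tags=frozenset()):
    """Flag retired-tag reuse and requirements lacking exactly one 'shall'.

    `requirements` maps a unique tag (e.g., 'SWR-101') to requirement text.
    The rules mirror the guidance in the text; the names are illustrative.
    """
    problems = []
    for tag, text in requirements.items():
        if tag in retired_tags:
            problems.append((tag, "tag was previously used and retired"))
        shall_count = len(re.findall(r"\bshall\b", text, re.IGNORECASE))
        if shall_count != 1:
            problems.append(
                (tag, f"expected exactly one 'shall', found {shall_count}"))
    return problems
```

Running such a check before each peer review catches lumped requirements (two shalls under one tag) while they are still cheap to split.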

6.7.6.5 Document Rationale

It is a good practice to include rationale with the requirements, since it can improve the quality of a requirement, reduce the time required to understand a requirement, improve accuracy, reduce time during maintenance, and be useful to educate engineers on the software functionality. The process of writing the rationale not only improves the reader’s comprehension of the requirements but also helps the authors write better requirements. The FAA Requirements Engineering Management Handbook states:

Coming up with the rationale for a bad requirement or assumption can be difficult. Forcing the specifier to think about why the requirement is necessary or why the assumption is being made will often improve the quality of the requirement… Requirements document what a system will do. Design documents how the system will do it. Rationale documents why a requirement exists or why it is written the way it is. Rationale should be included whenever there is something in the requirement that may not be obvious to the reader or that might help the reader to understand why the requirement exists [13].

The handbook goes on to provide recommendations for writing the rationale, including the following [13]:

  • Provide rationale throughout the requirements development to explain why the requirement is needed and why specific values are included.

  • Avoid specifying requirements in the rationale. If the information in the rationale is essential to the required system behavior, it should be part of the requirements and not the rationale.

  • Provide rationale when the reason for a requirement’s existence is not obvious.

  • Include rationale for environmental assumptions upon which the system depends.

  • Provide rationale for values and ranges in each requirement.

  • Keep each rationale short and relevant to the requirement being explained.

  • Capture rationale as soon as possible to avoid losing the train of thought.

6.7.6.6 Trace Requirements to Their Source

Each requirement should trace to one or more parent requirements (the higher level requirements from which the requirement was decomposed). Requirements engineers must ensure that each system requirement allocated to software is fully implemented by the software requirements that trace to it (i.e., its children).

Traceability should be documented as the requirements are written. It is virtually impossible to go back and correct the tracing later. Many requirements management tools provide the capability to include trace data, but the tools do not automatically know the trace relationships. Developers must be disciplined at tracing the requirements as they are developed.

In addition to tracing up, the requirements should be written in such a way that they are traceable down to the low-level requirements and test cases. Section 6.11 provides additional information on traceability.
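The bidirectional check implied above (every allocated system requirement is covered by at least one child HLR, and every non-derived HLR has a parent) can be sketched as follows. This is a minimal illustration; the data layout and field names (`parents`, `derived`) are assumptions, not the schema of any real requirements management tool.

```python
def check_trace(system_reqs, hlrs):
    """Bidirectional trace check (a sketch).

    `system_reqs` is the set of system requirement tags allocated to software.
    `hlrs` maps each high-level requirement tag to a dict with 'parents'
    (set of system requirement tags) and 'derived' (bool).
    """
    traced_up = set()
    orphans = []
    for tag, hlr in hlrs.items():
        if hlr["derived"]:
            continue  # derived HLRs have no parent; they go to the safety team
        if not hlr["parents"]:
            orphans.append(tag)  # non-derived HLR with no parent is suspect
        traced_up |= hlr["parents"]
    uncovered = system_reqs - traced_up  # system reqs with no child HLR
    return orphans, sorted(uncovered)
```

Because the trace data is captured as the requirements are written, a check like this can run continuously rather than as a painful cleanup at the end.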

6.7.6.7 Identify Uncertainties and Assumptions

It is common to have unknown information during requirements definition. Those can be identified with a TBD (to be determined) or some other clear notation. It’s a good practice to include a note or footnote to identify who is responsible for addressing the TBD and when it will be completed. All TBDs should be addressed before the requirements are formally reviewed and before implementation. Likewise, any assumptions should be documented, so that they can be confirmed and verified by the appropriate teams (e.g., systems, hardware, verification, or safety).
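Since all TBDs must be closed before formal review and implementation, it helps to inventory them mechanically. The sketch below assumes the literal "TBD" notation mentioned above; in practice the pattern would be whatever notation the project's standards define.

```python
import re

def find_tbds(requirements):
    """Count TBD markers per requirement so each can be assigned an
    owner and a closure date. `requirements` maps tag -> text."""
    return {tag: len(re.findall(r"\bTBD\b", text))
            for tag, text in requirements.items()
            if re.search(r"\bTBD\b", text)}
```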

6.7.6.8 Start a Data Dictionary

Some projects try to avoid having a data dictionary because it can be tedious to maintain. However, a data dictionary is extremely valuable for data-intensive systems. Most of the data dictionary is completed during design. However, it is beneficial to start documenting shared data (including data meaning, type, length, format, etc.) during requirements definition. The data dictionary helps with integration and overall consistency. It also helps prevent errors caused by inconsistent understanding of data [4]. Depending on the project details, the data dictionary and interface control document may be integrated.
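A data dictionary entry is just a record of the fields listed above (meaning, type, length, format). The sketch below shows one possible shape; the field names and the example values are illustrative assumptions, not a required format.

```python
from dataclasses import dataclass

# One illustrative data dictionary entry; field names are assumptions
# capturing the meaning/type/length/format attributes suggested in the text.
@dataclass(frozen=True)
class DataDictEntry:
    name: str            # shared data item, e.g., "baro_altitude"
    meaning: str         # plain-language description
    data_type: str       # e.g., "signed integer"
    length_bits: int     # storage length
    units: str           # engineering units
    valid_range: tuple   # (min, max) in those units

entry = DataDictEntry(
    name="baro_altitude",
    meaning="Barometric altitude received from the air data computer",
    data_type="signed integer",
    length_bits=32,
    units="feet",
    valid_range=(-2000, 60000),
)
```

Starting entries like this during requirements definition gives the design and integration teams a shared, unambiguous definition of each data item.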

6.7.6.9 Implement Characteristics of Good Requirements

Chapter 2 identified the characteristics of good system requirements. Software requirements should have those same characteristics, which include atomic, complete, concise, consistent, correct, implementation-free, necessary, traceable, unambiguous, verifiable, and viable. Additional suggestions for writing high-quality software requirements are provided here:

  • Use concise and complete sentences with correct grammar and spelling.

  • Use one shall for each requirement.

  • Use an active voice.

  • Emphasize important items using graphics, bolding, sequencing, white space, or some other method.

  • Use terms consistently as identified in the SWRD glossary or definitions section.

  • Avoid ambiguous terms. Examples of ambiguous terms include the following: as a goal, to the extent practical, modular, achievable, sufficient, timely, user-friendly, etc. If such terms are used, they need to be quantified.

  • Write requirements at the appropriate level of granularity. Usually the appropriate level is one that can be tested by one or just a few tests.

  • Keep the requirements at a consistent level of granularity or detail.

  • Minimize or avoid the use of words that indicate multiple requirements, such as unless or except.

  • Avoid using and/or or using the slash (/) to separate two words, since this can be ambiguous.

  • Use pronouns cautiously (e.g., it or they). It is typically better to repeat the noun.

  • Avoid i.e. (which means that is) and e.g. (which means for example) since many people get the meanings confused.

  • Include rationale and background for requirements in a notes or comment field (see Section 6.7.6.5). There is nothing better than a good comment to get inside the author’s head.

  • Avoid negative requirements, since they are difficult to verify.

  • Ensure that the requirements fully define the functionality by looking for omissions (i.e., things that are not specified that should be).

  • Build robustness into the requirements by thinking through how the software will respond to abnormal inputs.

  • Avoid words that sound alike or similar.

  • Use adverbs ending in -ly cautiously (e.g., reasonably, quickly, significantly, and occasionally), since they may be ambiguous.
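Several of the wording rules above (ambiguous terms, and/or, hedging -ly adverbs) are mechanical enough to lint automatically. The sketch below is a toy checker, not a substitute for peer review; the term lists are seeded from the examples in the text and would be extended per the project's requirements standards.

```python
import re

# Seed lists taken from the examples above; extend per project standards.
AMBIGUOUS_TERMS = ["as a goal", "to the extent practical", "sufficient",
                   "timely", "user-friendly", "and/or"]
HEDGING_ADVERBS = {"reasonably", "quickly", "significantly", "occasionally"}

def lint_requirement(text):
    """Return a list of potentially ambiguous wording in one requirement."""
    lowered = text.lower()
    findings = [t for t in AMBIGUOUS_TERMS if t in lowered]
    findings += [w for w in re.findall(r"\b\w+ly\b", lowered)
                 if w in HEDGING_ADVERBS]
    return findings
```

A finding is a prompt to quantify the term (e.g., replace "quickly" with a measurable response time), not an automatic rejection.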

Leveson emphasizes the importance of complete requirements, when she writes:

The most important property of the requirements specification with respect to safety is completeness or lack of ambiguity. The desired software behavior must have been specified in sufficient detail to distinguish it from any undesired program that might be designed. If a requirements document contains insufficient information for the designers to distinguish between observably distinct behavioral patterns that represent desired and undesired (or safe and unsafe) behavior, then the specification is ambiguous or incomplete [1].

6.7.7 Task 6: Provide Feedback on the System Requirements

Writing software requirements involves scrutiny of the system requirements. The software team often finds erroneous, missing, or conflicting system requirements. Any issues found with the system requirements should be documented in a problem report, communicated to the systems team, and followed up to confirm that action is taken. In many programs, the software team assumes that the systems team fixed the system requirements based on verbal or e-mail feedback; however, in the final throes of certification, it is discovered that the system requirements were not updated. To avoid this issue, the software team should proactively ensure that the system requirements are updated by writing problem reports against the system requirements and following up on each problem report. Failure to follow through on issues could lead to an inconsistency between system requirements and software functionality that may stall the certification process, since the requirements disconnect is considered a DO-178C noncompliance.

6.8 Verifying (Reviewing) Requirements

Once the requirements are mature and stable, they are verified. This normally occurs by performing one or more peer reviews. The purpose of the review process is to catch errors before they are implemented; therefore, it is one of the most important and valuable activities of safety-critical software development. When done properly, reviews can prevent errors, save significant time, and reduce cost. The peer review is performed by a team of one or more reviewers.

To optimize the review process, I recommend two stages of peer reviews: informal and formal. The informal stage is first and helps to mature requirements as quickly as possible. In fact, as previously mentioned, I support the concept of team development, where at least two developers jointly develop the requirements and continuously consult each other and check each other’s work. The goal is to perform frequent informal reviews early on, in order to minimize issues discovered during the formal peer review.

During the formal requirements review(s), reviewers use a checklist (which is typically included in the software verification plan or requirements standards). Based on DO-178C, the following items (as a minimum) are usually included in the checklist and assessed during the requirements review [3]:*

  • Entry criteria identified in the plans for the review have been satisfied. In most cases, this requires release of the system requirements, release of the software requirements standards, release of the software development and verification plans, and configuration control of the software requirements.

  • High-level software requirements comply with the system requirements. This ensures that the high-level requirements fully implement the system requirements allocated to software.

  • High-level software requirements trace to the system requirements. This is a bidirectional trace: all system-level requirements allocated to software should have high-level requirements implementing the system requirements, and all high-level requirements (except for derived requirements) should trace to system-level requirements. Section 6.11 provides additional thoughts on traceability.

  • High-level software requirements are accurate, unambiguous, consistent, and complete. This includes ensuring that inputs and outputs are clearly defined and in quantitative form (including units of measure, range, scaling, accuracy, and frequency of arrival), both normal and abnormal conditions are addressed, any diagrams are accurate and clearly labeled, etc.

  • High-level software requirements conform to the requirements standards. As recommended earlier, the requirements standards should include the attributes of good requirements mentioned earlier. Chapter 5 discussed the requirements standards.

  • High-level software requirements are verifiable, for example, requirements that involve measurement include tolerances, only one requirement per identifier/tag, quantifiable terms, and no negative requirements.

  • High-level software requirements are uniquely identified. As noted earlier, each requirement should have a shall and a unique identifier/tag.

  • High-level requirements are compatible with the target computer. The purpose is to ensure that high-level requirements are consistent with the target computer’s hardware/software features—especially with respect to response times and input/output hardware. Oftentimes, this is more applicable during design reviews than during requirements reviews.

  • Proposed algorithms, especially in the area of discontinuities, have been examined to ensure their accuracy and behavior.

  • Derived requirements are appropriate, properly justified, and have been provided to the safety team.

  • Functional and operational requirements are documented for each mode of operation.

  • High-level requirements include performance criteria, for example, precision and accuracy.

  • High-level requirements include timing requirements and constraints.

  • High-level requirements include memory size constraints.

  • High-level requirements include hardware and software interfaces, such as protocol, formats, and frequency of inputs and outputs.

  • High-level requirements include failure detection and safety monitoring requirements.

  • High-level requirements include partitioning requirements to specify how the software components interact with each other and the software levels of each partition.

During the formal reviews, the requirements may be verified as a whole or as functional groups. If reviews are divided by functionality, there still needs to be a review that examines the consolidated groups for consistency and cohesion.

For a review to be effective, the right reviewers should be assembled. Reviewers should include qualified technical personnel, such as software developers, systems engineers, test engineers, safety personnel, software quality assurance engineers, and certification liaison personnel.* It is imperative that every reviewer read and thoroughly understand the checklist items before carrying out the review. If the reviewers are unfamiliar with the checklist, training should be provided with guidance and examples for each of the checklist items.

The comments from the formal review should be documented, categorized (for instance, significant issue, minor issue, editorial comment, duplicate comment, no change), and dispositioned. The commenter should agree with the action taken before the review is closed. The requirements checklist should also be successfully completed prior to closure of the review. More recommendations for the peer review process are included in the following section.

6.8.1 Peer Review Recommended Practices

Although not required, most projects use a formal peer review process in order to verify their plans, requirements, design, code, verification cases and procedures, verification reports, configuration index, accomplishment summary, and other key life cycle data. A team with the right members can often find errors that an individual might miss. The same peer review process can be used across the multiple life cycle data items (it isn’t limited to requirements). In other words, the review process can be standardized, so that only the checklists, data to be reviewed, and reviewers change for the actual reviews. This section identifies some of the recommended practices to integrate into the peer review process:

  • Assign a moderator or technical lead to schedule the review, provide the data, make assignments, gather and consolidate the review comments, moderate the review meeting, ensure that the review checklist is completed, make certain that all review comments are addressed prior to closure of the review, etc.

  • Ensure that the data to be reviewed is under configuration management. It may be informal or developmental configuration management, but it should be controlled and the version identified in the peer review records.

  • Identify the data to be reviewed and the associated versions in the peer review record.

  • Identify the date of the peer review, reviewers invited, reviewers who provided comments, and amount of time each reviewer spent on the review. The reviewer information is important to provide evidence of independence and due diligence.

  • Involve the customer, as needed or as required by contract or procedures.

  • Use a checklist for the review and ensure that all reviewers have been trained on the checklist and the appropriate standards for the data under review. The checklists are typically included or referenced in the approved plans or standards.

  • Provide the review package (including data to be evaluated with line or section numbers, checklist, review form, and any other data needed for the review) to the reviewers, and identify required and optional reviewers.

  • Allocate responsibilities for each reviewer (e.g., one person to review traceability, one person to review compliance to standards, and so on). Ensure that required reviewers are covering all aspects of the checklist, reviewers perform their tasks, and reviewers are qualified for the task assigned. If a required reviewer is unavailable or cannot carry out his or her task, someone else equally qualified may need to perform the review, or the review may need to be rescheduled.

  • Provide instructions to the reviewers (e.g., identify comment due dates, meeting dates, roles, focus areas, open issues, file locations, reference documents).

  • Give reviewers adequate notice and time to perform the review. If a required reviewer needs more time, reschedule the review.

  • Ensure that the proper level of independence is achieved. The DO-178C Annex A tables identify by software level when independence is required. Chapter 10 discusses verification independence.

  • Use qualified reviewers. Key technical reviewers include those who will use the data (e.g., tester and designer) and one or more independent developers (when independence is required). As noted earlier, for requirements reviews, it’s recommended that systems and safety personnel be involved. The review will only be as good as the people performing it, so it pays off over the life of a project to use the best and most qualified people for technical roles. Junior engineers can learn by performing support roles.

  • Invite software quality assurance and certification liaison personnel, as well as any other support personnel needed.

  • Keep the team size to a reasonable number. This is subjective and tends to vary depending on the software level and the significance of the data being reviewed.

  • Provide a method for reviewers to document comments (a spreadsheet or comment tool is typical). The following data are normally entered by the reviewer: reviewer name, document identification and version, section or line number of document, comment number, comment, and comment classification (significant, minor, editorial, etc.).

  • Schedule a meeting to discuss nontrivial comments or questions. Some companies prefer to limit the number of meetings and only use them for controversial topics. If meetings are not held, ensure there is a way to obtain agreement from all reviewers on the necessary actions. In my experience, a brief meeting to discuss technical issues is far more effective and efficient than ongoing e-mail threads.

  • Assuming there is a meeting, give the author of the data time to review and propose a response for each comment prior to the meeting. The meeting time should focus on the nontrivial items that require face-to-face interaction.

  • If a team is spread out geographically, use electronic networking features (e.g., WebEx, NetMeeting, or Live Meeting) and teleconferencing to involve the appropriate people.

  • Limit the amount of time discussing issues. Some companies impose a 2-minute rule; any discussions that require more than 2 minutes are tabled for future discussion. If an item cannot be resolved in the time allocated, set up a follow-on meeting with the right stakeholders. The moderator helps to keep the discussions on track and on schedule.

  • Identify a process to discuss controversial issues, such as an escalation path, a lead arbitrator, or a product control board.

  • Complete the review checklist. Several potential approaches may be used: the checklist(s) may be completed by each team member for their part, by the team during the peer review meeting, or by a qualified reviewer. It is typically not possible to successfully complete the checklist until all of the review comments are addressed.

  • Ensure that all issues are addressed and closed, before the review is closed. If an issue needs to be addressed but cannot be addressed prior to closing the review, a problem report should be generated. The problem report number should be included in the peer review records to ensure that the issue is eventually addressed or properly dispositioned.

  • Break large documents into smaller packages and review high-risk areas first. Once all of the individual packages are reviewed, an experienced engineer or team should perform an integration review to make sure all of the packages are consistent and accurate. That is, look at the data together.

  • Have an organized approach to store and retrieve review records and checklists, since they are certification evidence.
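The comment record described in the practices above can be sketched as a simple data structure. This is an illustrative sketch only; the field names and the `review_can_close` rule are assumptions based on the practices listed in this section, not a prescribed DO-178C format.

```python
from dataclasses import dataclass

# Hypothetical peer-review comment record mirroring the fields suggested
# in the practices above (reviewer name, document and version, location,
# comment number, text, and classification).
@dataclass
class ReviewComment:
    reviewer: str
    document: str           # document identification and version
    location: str           # section or line number of the document
    number: int
    text: str
    classification: str     # significant, minor, editorial, duplicate, no change
    disposition: str = ""   # action taken, agreed with the commenter
    problem_report: str = ""  # problem report number if the issue is deferred

def review_can_close(comments) -> bool:
    """Illustrative closure rule: a review closes only when every comment
    is dispositioned or deferred to a tracked problem report."""
    return all(c.disposition or c.problem_report for c in comments)
```

A review with an undispositioned significant comment would fail `review_can_close` until the comment is either resolved or captured in a problem report, matching the guidance that open issues must be tracked before the review closes.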

Here are some common issues that arise during the actual implementation of a peer review process, which can be avoided by proper management of the peer reviews:

  • Considering the activity as a check mark, rather than a tool to work out technical issues early, add value to the end product, and save time and money over the life of the project.

  • Not giving reviewers time to thoroughly review the data.

  • Having an overly large review team.

  • Not using qualified and well-trained reviewers.

  • Not closing comments before proceeding to the next phase.

  • Not completing the required checklist fully or promptly.

6.9 Managing Requirements

6.9.1 Basics of Requirements Management

A vital part of software development is requirements management. No matter how thorough the planning and the diligence in writing the requirements, change will happen. An organized requirements management process is essential to managing the inevitable change. Requirements management includes “all activities to maintain the integrity, accuracy, and currency of the requirements agreement as the project progresses” [4].

In order to manage the requirements, the following should be done:

  • Develop requirements to be modifiable. As previously mentioned, modifiability is a characteristic of good requirements. Modifiable requirements are well organized, at the appropriate level of granularity, implementation-free, clearly identified, and traceable.

  • Baseline the requirements. Both the functional and nonfunctional requirements should be baselined. The baseline typically occurs after the requirements have been through the peer review. For large projects, it is useful to have version control of the individual requirements as well as sections of or the entire SWRD.

  • Manage all changes to the baseline. This typically occurs through the problem reporting process and change control board. Changes to requirements are identified, approved by the change control board, implemented, and rereviewed.

  • Update requirements using the approved process. The updates to the requirements should use the same requirements process defined in the plans. That is, follow the standards, implement quality attributes, perform reviews, etc. Some companies diligently follow the process the first time around but get lax during the updates. Because of this tendency, external auditors and quality assurance engineers tend to look closely at the thoroughness of modifications.

  • Rereview the requirements. Once changes are implemented, the changes and requirements affected by the changes should be rereviewed. If multiple requirements are changed, a team review may be appropriate. If the number of requirements changed or impacted is small and straightforward, the review may be performed by an individual. The appropriate level of independence is still needed.

  • Track status. The status of each requirement should be tracked. The typical states of the requirements changes are: proposed, approved, implemented, verified, deleted, or rejected [9]. Oftentimes, the status is managed through the problem reporting process in order to avoid having two status systems. The problem reporting process typically includes the following states: open (requirements change has been proposed), in-work (change has been approved), implemented (change has been made), verified (change has been reviewed), cancelled (change not approved), and closed (change fully implemented, reviewed, and under configuration management).

Change management is further discussed in Chapter 10.

6.9.2 Requirements Management Tools

Most companies use a commercially available requirements management tool to document and help manage their requirements; however, some use their own homegrown tools. Customers may mandate a specific tool in order to promote a consistent requirements management approach at all hierarchical levels. Whatever requirements management tool is selected, it should have the capability to do the following, as a minimum:

  • Easily add requirements attributes or fields

  • Export to a readable document format

  • Accommodate graphics (e.g., tables, flowcharts, user interface graphics)

  • Baseline requirements

  • Add or delete requirements

  • Handle multiple users in multiple geographic locations

  • Document comments or rationale for requirements

  • Trace up and down

  • Generate trace reports

  • Protect from unauthorized change (e.g., password)

  • Be backed up

  • Reorder requirements without renumbering them

  • Manage multiple levels of requirements (such as system, high-level software, low-level software)

  • Add levels of requirements if needed

  • Support multiple development programs

Typical requirements fields or attributes that are included in the requirements management tool are as follows:

  • Requirements identification—a unique identifier/tag for the requirement.

  • Requirements applicability—if multiple projects are involved, some requirements may or may not apply.

  • Requirements description—states the requirement.

  • Requirements comment—explains important things about the requirement such as rationale and related requirements. For derived requirements, this includes rationale for why the derived requirement is needed.

  • Status—to identify the status of each requirement (such as approved by change control board, in-work, implemented, verified, deleted, or rejected).

  • Change authority—identifies the problem report, change request number, etc. used to authorize the requirement implementation or change.

  • Trace data—documents trace up to parent requirements, down to child requirements, and out to test cases and/or procedures.

  • Special fields—identify safety requirements, derived requirements, robustness requirements, approval status, etc.
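The attribute list above can be summarized as a record structure. The sketch below is hypothetical: the field names are illustrative stand-ins for the attributes described in the text, and real requirements management tools define their own schemas.

```python
from dataclasses import dataclass, field

# Hypothetical requirement record mirroring the attribute list above.
# Field names are assumptions for illustration; actual tools vary.
@dataclass
class Requirement:
    req_id: str                                         # unique identifier/tag
    description: str                                    # the "shall" statement
    applicability: list = field(default_factory=list)   # projects it applies to
    comment: str = ""                                   # rationale, related requirements
    status: str = "in-work"                             # approved, implemented, verified, etc.
    change_authority: str = ""                          # problem report / change request number
    traces_up: list = field(default_factory=list)       # parent requirements
    traces_down: list = field(default_factory=list)     # child requirements, test cases
    derived: bool = False                               # derived requirements need safety review
    safety: bool = False                                # special field flagging safety requirements
```

A tool built around such a record can then support the capabilities listed earlier, such as filtering by status, generating trace reports from `traces_up`/`traces_down`, and flagging derived requirements for the safety team.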

A document or spreadsheet can be used for the requirements documentation. However, the more complex the project and larger the development team, the more benefit there is to using a requirements management tool.

If a requirements management tool is used, the following are important to have:

  • Technical support and training. The developers should be trained on how to use the tool properly.

  • Project-specific instructions to explain how to use the tool properly in the given environment (such information is often included in the standards or a process manual).

  • Frequently asked questions and examples to help address common issues that will be encountered.

Requirements management tools can be powerful. They can help manage requirements and versions, support larger teams, facilitate traceability, track status of requirements, support use of requirements on multiple projects, and much more. However, even with a requirements management tool, good requirements change management is needed. The tool will not “compensate for lack of process, discipline, experience, or understanding” [4].

6.10 Requirements Prototyping

Alan Davis explains: “Prototyping is the technique of constructing a partial implementation of a system so that customers, users, or developers can learn more about a problem or solution to that problem” [12]. Prototypes can help to get customer feedback on key functionality, explore design options to determine feasibility, mature the requirements, identify ambiguous or incomplete requirements, minimize requirements misunderstandings, and improve the requirements robustness.

However, when I hear the word prototype, I tend to cringe (at least a little). Many projects not only use the prototype to help with requirements development but also try to salvage the prototype code. This is rarely successful. Prototype code is often developed quickly in order to prove a concept. It typically is not robustly designed and does not comply with the development standards. Prototyping can be a great way to mature the requirements, get customer feedback, and determine what does and doesn’t work. The problem arises when the customer or management wants to use the prototype code for integration and certification. This can lead to reverse engineering the design and requirements and considerable rework of the code. It ends up taking longer and having more code issues than discarding the code and starting over.

Despite this, there are exceptions and prototyping can be successfully used—especially when prototype code is planned and part of an organized process, and not just a last-minute idea to recover lost time. There are two common approaches to successful prototyping [4,12]:

  1. Throwaway prototype: This is a quick prototype developed without firm requirements and design in order to determine functional feasibility, obtain customer feedback, and identify missing requirements. The throwaway prototype is used to mature the requirements and then discard the code. In order to avoid the temptation to keep the prototype, (1) only implement part of the functionality, (2) establish a firm agreement up front that the code will be discarded, and (3) use a different environment. Without these protections, some customers or project managers will be tempted to try to use the prototype for certification. Throwaway prototypes are not built to be robust or efficient, nor are they designed to be maintainable. Keeping a throwaway prototype is a big mistake.

  2. Evolutionary prototype: This approach is entirely different. It is developed with the intent to use the code and supporting data later. The evolutionary prototype is usually a partial implementation of a key functionality. Once the functionality is proven, it is cleaned up and additional functionality is added. The evolutionary prototype is intended to be used in the final product; therefore, it uses a rigorous process, considers the quality attributes of requirements and design, evaluates the multiple design options, implements robustness, follows requirements and design standards, and uses code comments and coding standards. The prototype forms the foundation of the final product and is also closely related to the spiral life cycle model.

Both prototype approaches may be used, but they should be explained in the plans, agreed upon with the certification authority, and implemented as agreed.

6.11 Traceability

DO-178C requires bidirectional traceability between system and software high-level requirements, software high-level requirements and software low-level requirements, software low-level requirements and code, requirements and test cases, test cases and test procedures, and test procedures and test results [3]. This section examines (1) the importance and benefits of traceability, (2) top-down and bottom-up traceability, (3) what DO-178C says about traceability, and (4) trace challenges to avoid.

6.11.1 Importance and Benefits of Traceability

Traceability between requirements, design, code, and test data is essential for compliance to DO-178C objectives. There are numerous benefits of good traceability.

Benefit 1: Traceability is needed to pass a certification authority audit. Traceability is vital to a software project and DO-178C compliance. Without good traceability, the entire claim of development assurance falls apart because the certification authority is not assured. Many of the objectives and concepts of DO-178C and good software engineering build upon the concept of traceability.

Benefit 2: Traceability provides confidence that the regulations are satisfied. Traceability is important because, when it is done properly, it ensures that all of the requirements are implemented and verified, and that only the requirements are implemented. This directly supports regulatory compliance, since the regulations require evidence that intended functionality is implemented.

Benefit 3: Traceability is essential to change impact analysis and maintenance. Since changes to requirements, design, and code are essential to software development, engineers must consider how to make the software and its supporting life cycle data changeable. When a change occurs, traceability helps to identify what data are impacted, need to be updated, and require reverification.

Benefit 4: Traceability helps with project management. Up-to-date trace data help project managers know what has been implemented and verified and what remains to be done.

Benefit 5: Traceability helps determine completion. A good bidirectional trace scheme enables engineers to know when they have completed each data item. It also identifies data items that have not yet been implemented or that were implemented without a driver. When a phase (e.g., design or test case development) is completed, the trace data show that the software life cycle data are complete and consistent with the previous phase’s data and are ready to be used as input to the next phase.

6.11.2 Bidirectional Traceability

In order to achieve the all-requirements-and-only-requirements implementation goal, two kinds of traceability are needed: forward traceability (top-down) and backward traceability (bottom-up). Figure 6.3 illustrates the bidirectional traceability concepts required by DO-178C.


Figure 6.3 Bidirectional traceability between life cycle data.

Bidirectional traceability doesn’t just happen, it must be considered throughout the development process. The best way to enforce it is to implement and check the bidirectional traceability at each phase of the development effort, as noted here:

  • During the review of the software requirements, verify the top-down and bottom-up tracing between the system requirements and high-level software requirements.

  • During the review of the design description, verify the bidirectional tracing between the high-level software requirements and the low-level software requirements.

  • During the code reviews, verify the bidirectional tracing between the low-level software requirements and the source code.

  • During the review of the test cases and procedures, verify the bidirectional tracing between the test cases and requirements (both high-level and low-level) and between the test cases and test procedures.

  • During the review of the test results, verify the bidirectional tracing between the test results and the test procedures.

Both directions should be considered during the reviews. Just because one direction is complete doesn’t necessarily mean the trace is bidirectional. For example, all system requirements allocated to software may trace down to high-level software requirements; however, there may be some high-level software requirements that do not trace up to a system requirement. Derived requirements should be the only requirements that don’t have parents.

The tracing activity shouldn’t just look for the completeness of the tracing but should also evaluate the technical accuracy of the tracing. For example, when evaluating the trace between a system requirement and its children (high-level software requirements), consider these questions:

  • Do these high-level software requirements completely implement the system requirement?

  • Is there any part of the system requirement not reflected in the high-level software requirements?

  • Is the relationship between these requirements accurate and complete?

  • Are there any missing traces?

  • If the high-level software requirements trace to multiple system requirements, is the relationship of the requirements group accurate and complete?

  • Is the granularity for each level of the requirements appropriate? For example, is the ratio of system to high-level software requirements appropriate? There isn’t a magic number, but a significant number of requirements with a 1:1 or 1:>10 ratio might indicate a granularity issue.

  • If there are many-to-many traces, are they appropriate? An overabundance of many-to-many traces (i.e., children tracing to many parents and parents tracing to many children) may indicate a problem (this is discussed in Section 6.11.4).
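Several of the checks above (childless parents, orphan children, and many-to-many hotspots) are mechanical enough to automate. The sketch below is an illustrative assumption of how such a check might look, given trace links as parent/child ID pairs; it flags candidates for human review but cannot judge technical accuracy, which still requires the review questions listed above.

```python
from collections import defaultdict

def check_bidirectional(trace_pairs, parents, children, derived):
    """Illustrative bidirectional trace check (not a certification tool).

    trace_pairs: set of (parent_id, child_id) trace links
    parents, children: sets of all requirement IDs at each level
    derived: set of child IDs justified as derived (no parent expected)
    Returns childless parents, orphan children, and many-to-many links.
    """
    down = defaultdict(set)   # parent -> its children
    up = defaultdict(set)     # child -> its parents
    for p, c in trace_pairs:
        down[p].add(c)
        up[c].add(p)

    # Parents with no children: possibly unimplemented system requirements.
    childless = {p for p in parents if not down[p]}
    # Children with no parents and no derived justification: trace holes.
    orphans = {c for c in children if not up[c] and c not in derived}
    # Links where both ends fan out: candidate many-to-many hotspots
    # (an overabundance may signal poorly organized requirements).
    many_to_many = {(p, c) for p, c in trace_pairs
                    if len(down[p]) > 1 and len(up[c]) > 1}
    return childless, orphans, many_to_many
```

For instance, if system requirement S3 has no high-level children, it appears in the childless set for investigation; a high-level requirement without a parent passes only if it carries a derived-requirement justification.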

6.11.3 DO-178C and Traceability

DO-178B identified traceability as a verification activity but was somewhat vague about how that trace information should be documented. Most applicants include trace information as part of their verification report; however, some include it in the developed data itself (i.e., in the requirements, design, code, and test cases and procedures). DO-178C is still flexible regarding where the trace information is documented; however, it does require an artifact called trace data during the development of requirements, design, code, and tests.

DO-178C section 5.5 identifies the trace activities that occur during software development. It explicitly requires bidirectional tracing between (1) system requirements allocated to software and the high-level software requirements, (2) high-level software requirements and low-level software requirements, and (3) low-level software requirements and source code [3]. DO-178C Table A-2 identifies trace data as the evidence of the trace activity and an output of the development process.

Similarly, DO-178C section 6.5 explains the trace activities during the verification process and requires bidirectional tracing between (1) software requirements and test cases, (2) test cases and test procedures, and (3) test procedures and test results. DO-178C Table A-6 identifies trace data as an output of the testing process.

DO-178B did not include the term bidirectional traceability, although it alluded to it and essentially required it. DO-178B section 6 discussed forward traceability (sections 6.3.1.f, 6.3.2.f, 6.3.4.e, and 6.4.4.1), whereas the objectives in DO-178B Tables A-3 (objective 6), A-4 (objective 6), A-5 (objective 5), and A-7 (objectives 3 and 4) alluded to backward traceability. Thus, both have been required by the certification authorities. However, DO-178C is more explicit in this area. DO-178C specifically identifies the need for bidirectional traceability, as well as the development of trace data.

6.11.4 Traceability Challenges

Like everything else in software engineering, implementing good traceability has its challenges. Some of the common challenges are as follows:

Challenge 1: Tracing proactively. Most software engineers love to solve problems and create design or code. However, few of them enjoy the paperwork and record keeping that goes with the job. In order to have accurate and complete traceability, it is essential that traceability be documented as the development takes place. In other words, as the software high-level requirements are written, the tracing to and from the system requirements should be noted; as the design is documented, the tracing to and from high-level requirements should be written down; as the code is being developed, the tracing to and from the low-level requirements should be documented; etc.

If tracing doesn’t occur proactively by the author of the data, it is difficult for someone less familiar with the data to do it later, because he or she may not know the thought process, context, or decisions of the original developers and may be forced to make guesses or assumptions (which may be wrong). Additionally, if the tracing is done after the fact, there tend to be holes in the bidirectional traceability. To say it another way, some requirements may only be partially implemented, and some functions may be implemented that aren’t in the requirements. Instead of properly fixing the requirements, the after-the-fact cleanup engineers may partially trace (to make it look complete), call many requirements derived (because they don’t have parents), or trace to general requirements (ending up with a one-to-too-many or many-to-many dilemma). This seems especially to be the case when inexperienced engineers are used to perform the after-the-fact tracing.

Challenge 2: Keeping the trace data current. Some projects do very well at creating the initial trace data; however, they fail to update it when changes are made. Traceability should be evaluated and appropriately updated anytime the data are modified. Current and accurate trace data are critical to requirements management decisions and the change impact assessments (which are discussed in Chapter 10).

Challenge 3: Doing bidirectional tracing. It can be tempting to think everything is complete, when the traceability in one direction is complete. However, as previously noted, just because the forward tracing is complete doesn’t mean the backward trace is, and vice versa. The trace data must be considered from both top-down (forward) and bottom-up (backward) perspectives.

Challenge 4: Many-to-many traces. This occurs when parent requirements trace to multiple children and children requirements trace to multiple parents. While there are definitely situations where many-to-many tracing is accurate and appropriate, an overabundance of many-to-many traces tends to be the symptom of a problem. Oftentimes, the problem is that the requirements are not well organized or that the tracing was done after the fact. While there are no certification guidelines against many-to-many traces, they can be very confusing to the developers and certifying authority, and they are often very difficult to justify and maintain. Many-to-many tracing should be minimized as much as possible.

Challenge 5: Deciding what is derived and what is not. Derived requirements can be a challenge. More than once I’ve evaluated a project where the derived flag was set because there were missing higher level requirements and the project didn’t want to add requirements. Derived requirements should be used cautiously. They are not plugs for missing higher level requirements, but represent design details that aren’t significant at the higher level. Derived requirements should not be added to compensate for missing functionality at the higher level. One technique to help avoid inaccurate classification of derived requirements is to justify why each requirement is needed and why it is classified as derived.* If the requirement can’t be explained or justified, it may not be needed or a higher level requirement may be missing. Keep in mind that all derived requirements need to be evaluated by the safety team, and all code must be traceable to requirements—there is not a category called derived code.

Challenge 6: Weak links. Oftentimes there will be some requirements that are debatable as to whether they are traced or derived. They may be related to a higher level requirement but not the direct result of the higher level requirement. If one decides to include the trace, it is helpful to include rationale about the relationship between the higher and lower level requirements. For lack of a better description, I call these debatable traces weak links. By providing a brief note about why the weak links are included, it helps all of the users of the requirements better understand the connection and to evaluate the impact of future changes. The notes also help the certification and software maintenance efforts by explaining what may not be obvious relationships. The explanation does not need to be verbose; a short sentence or two is usually quite adequate. Even if the requirement is classified as derived, it’s still recommended to mention the relationship, since it may help change impact analysis and change management later (i.e., it supports modifiability).

Challenge 7: Implicit traceability. Some projects use a concept of implicit traceability. For example, the tracing may be assumed by the naming convention or the document layout. Such approaches have been accepted by the certification authorities. The new challenge is DO-178C’s requirement for trace data. Implicit tracing is built in and doesn’t result in a separate trace artifact. Therefore, implicit tracing should be handled cautiously. Here are a few suggestions:

  • Include the trace rules in the standards and plans, so that developers know the expectations.

  • Document the reviews of the tracing well to ensure that the implicit tracing is accurate and complete.

  • Identify the approach in the software plans and get the certification authority’s buy-in.

  • Be consistent. Do not mix implicit and explicit tracing in a section (unless well documented).

Challenge 8: Robustness testing traces. Sometimes developers or testers argue that robustness testing does not need to be traceable to the requirements, since they are trying to break the software rather than prove intended functionality. They contend that by limiting the testers to just the requirements, they might miss some weakness in the software. As will be discussed in Chapter 9, the break-it mentality is important for effective software testing. Testers should not limit themselves to the requirements when they are thinking through their testing effort. Testers should, however, identify when the requirements are incomplete, rather than create robustness tests that don’t trace to requirements. Frequently, the testers identify missing scenarios that should be reflected in the requirements. It’s highly recommended that the testers participate in requirements and design reviews to proactively identify potential requirements weaknesses. Likewise, developers may want to be involved in test reviews to quickly fill any requirements gaps and to ensure that testers understand the requirements.

References

1. N. Leveson, Safeware: System Safety and Computers (Reading, MA: Addison-Wesley, 1995).

2. IEEE, IEEE Standard Glossary of Software Engineering Terminology, IEEE Std 610.12-1990 (Los Alamitos, CA: IEEE Computer Society Press, 1990).

3. RTCA DO-178C, Software Considerations in Airborne Systems and Equipment Certification (Washington, DC: RTCA, Inc., December 2011).

4. K. E. Wiegers, Software Requirements, 2nd edn. (Redmond, WA: Microsoft Press, 2003).

5. D. Leffingwell, Calculating the return on investment from more effective requirements, American Programmer 10(4), 13–16, 1997.

6. Standish Group Study (1995) referenced in I. F. Alexander and R. Stevens, Writing Better Requirements (Harlow, U.K.: Addison-Wesley, 2002).

7. D. L. Lempia and S. P. Miller, Requirements Engineering Management Findings Report, DOT/FAA/AR-08/34 (Washington, DC: Office of Aviation Research, June 2009).

8. SearchSoftwareQuality.com, Software quality resources. http://searchsoftwarequality.techtarget.com/definition/use-case (accessed on 5/1/2012).

9. K. E. Wiegers, More about Software Requirements (Redmond, WA: Microsoft Press, 2006).

10. RTCA DO-248C, Supporting Information for DO-178C and DO-278A (Washington, DC: RTCA, Inc., December 2011).

11. Certification Authorities Software Team (CAST), Merging high-level and low-level requirements, Position Paper CAST-15 (February 2003).

12. A. M. Davis, Software Requirements (Upper Saddle River, NJ: Prentice-Hall, 1993).

13. D. L. Lempia and S. P. Miller, Requirements Engineering Management Handbook, DOT/FAA/AR-08/32 (Washington, DC: Office of Aviation Research, June 2009).

Recommended Readings

1. K. E. Wiegers, Software Requirements, 2nd edn. (Redmond, WA: Microsoft Press, 2003). While not written for safety-critical software, this book is an excellent resource for requirements authors.

2. D. L. Lempia and S. P. Miller, Requirements Engineering Management Handbook, DOT/FAA/AR-08/32 (Washington, DC: Office of Aviation Research, June 2009). This handbook was sponsored by the FAA and provides valuable guidelines for those writing safety-critical software requirements.

*A requirements engineer will likely have other responsibilities, but this chapter focuses on his or her role as a requirements author; hence, the term requirements engineer is used throughout this chapter.

*A use case is a methodology used in analysis to identify, clarify, and organize requirements. Each use case is composed of a set of possible sequences of interactions between the software and its users in a particular environment and related to a certain goal. “A use case can be thought of as a collection of possible scenarios related to a particular goal, indeed, the use case and goal are sometimes considered to be synonymous” [8]. Use cases can be used to organize functional requirements, model user interactions, record scenarios from events to goals, describe the main flow of events, and describe multiple levels of functionality [8].

*Added brackets for clarification.

*CAST is a team of international certification authorities who strive to harmonize their positions on airborne software and aircraft electronic hardware in CAST papers.

*These are based on DO-178C, sections 6.3.1 and 11.9, unless otherwise noted.

*Per DO-178C section 8.1.c.

*Normally quality and certification personnel participate at their discretion. Additionally, safety may review the requirements without participating in the review meeting. The mandatory reviewers should be clearly specified in the plans.

*DO-178C section 5.1.2.h identifies this as an activity.
