5 Software Planning

Acronyms

CC1 control category #1
CC2 control category #2
COTS commercial off-the-shelf
CRC cyclic redundancy check
FAA Federal Aviation Administration
MISRA Motor Industry Software Reliability Association
PQA product quality assurance
PQE product quality engineer
PR problem report
PSAC Plan for Software Aspects of Certification
RTOS real-time operating system
SAS Software Accomplishment Summary
SCI Software Configuration Index
SCM software configuration management
SCMP Software Configuration Management Plan
SDP Software Development Plan
SLECI Software Life Cycle Environment Configuration Index
SOI stage of involvement
SQA software quality assurance
SQAP Software Quality Assurance Plan
SVP Software Verification Plan

5.1 Introduction

You’ve probably heard the saying: “If you fail to plan, you plan to fail.” Or, maybe you’ve heard another one of my favorites: “If you aim at nothing, you’ll hit it every time.” Good software doesn’t just happen—it takes extensive planning, as well as well-qualified personnel to execute and adapt the plan.

The DO-178C planning process involves the development of project-specific plans and standards. DO-178C compliance (and development assurance in general) requires that plans are both documented and followed. If the plans are written to address the applicable DO-178C objectives, then following them will ensure compliance to the objectives. Hence, DO-178C compliance and certification authority approval hinges on a successful planning effort.

DO-178C identifies five plans and three standards and explains the expected contents of each document. While most organizations follow the five-plans-and-three-standards suggestion, it is acceptable to package the plans and standards as best fits the needs of the organization. Although not recommended, the developer could combine all the plans and standards in a single document (keep in mind that in this case, the entire document would need to be submitted to the certification authority). Using a nontraditional planning structure can create some challenges on the first exposure to the certification authority and may require additional submittals; however, it is acceptable as long as all of the necessary topics identified in DO-178C are adequately discussed. Regardless of how the plans and standards are packaged, the recommended contents from DO-178C sections 11.1 through 11.8 need to be addressed, and the documents must be consistent.

5.2 General Planning Recommendations

Before explaining each of the five plans, let’s consider some general planning recommendations.

Recommendation 1: Ensure that the plans cover all applicable DO-178C objectives and any applicable supplement objectives.* The plans should be written so that the team will comply with all of the applicable objectives when they execute the plans. In order to ensure this, a mapping between the DO-178C (and applicable supplements) objectives and the plans is helpful. When completed, the mapping is often included in an appendix of the Plan for Software Aspects of Certification (PSAC). I recommend presenting the mapping as a four-column table—each column is described here:

  1. Table/Objective #: Identifies the DO-178C (and applicable supplements) Annex A table and objective number.

  2. Objective Summary: Includes the objective summary, as it appears in Annex A of DO-178C and/or the applicable supplements.

  3. PSAC Reference: Identifies sections in the PSAC that explain how the objective will be satisfied.

  4. Other Plans Reference: Identifies sections in the team’s plans (e.g., Software Development Plan [SDP], Software Verification Plan [SVP], Software Configuration Management Plan [SCMP], and Software Quality Assurance Plan [SQAP]) that explain the detailed activities to satisfy the objective.
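A mapping like the one above can also be sanity-checked before the PSAC is published, ensuring no objective is left without a reference. The following is a minimal sketch in Python; the objective numbers, summaries, and section references are invented for illustration and do not come from any actual project:

```python
# Hypothetical sketch: check a DO-178C objectives-to-plans mapping for gaps
# before including it in the PSAC appendix. All row contents are illustrative.

mapping = [
    {"objective": "A-1/1", "summary": "Plans comply with DO-178C",
     "psac_ref": "4.1", "other_plans": ["SDP 2.0", "SQAP 3.1"]},
    {"objective": "A-2/1", "summary": "High-level requirements developed",
     "psac_ref": "4.2", "other_plans": ["SDP 3.2"]},
    {"objective": "A-6/3", "summary": "Code complies with low-level requirements",
     "psac_ref": "4.5", "other_plans": []},  # gap: no detailed-plan reference yet
]

def find_gaps(rows):
    """Return objectives missing a PSAC reference or a detailed-plan reference."""
    return [r["objective"] for r in rows
            if not r["psac_ref"] or not r["other_plans"]]

print(find_gaps(mapping))  # -> ['A-6/3']
```

A check like this is trivial, but running it each time the mapping is revised helps keep the appendix accurate and complete, which (as noted above) is exactly what designees and certification authorities look for.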

Recommendation 2: Write the plans and standards in such a way that they can be followed by the teams implementing them. The target audience for the PSAC is the certification authority; however, the intended audience for the other plans and the standards is the teams (e.g., development team, verification team, and quality assurance team) that will execute the plans. They should be written at the appropriate level for the team members to understand and properly execute them. If it is an experienced team, the plans may not require as much detail. However, a less experienced team or a project involving extensive outsourcing typically requires more detailed plans.

Recommendation 3: Specify what, how, when, and who. In each plan, explain what will be done, how it will be done, when it will be done (not the specific date, but when in the overall progression of activities), and who will do it (not necessarily the specific names of engineers, but the teams who will perform the tasks).

Recommendation 4: Complete the plans and put them under configuration management before the software development begins. Unless plans are finalized early in the project, it becomes difficult to establish the intended direction, find the time to write them later, and convince the certification authority that they were actually followed. Even if plans are not released before the development begins, they should be drafted and put under configuration management. The formalization (i.e., review and release) should occur as soon as possible. Any changes from the draft versions to the released versions of the plans should be communicated to the teams to ensure they update data and practices accordingly.

Recommendation 5: Ensure that each plan is internally consistent and that there is consistency between the plans. This seems like common sense, but inconsistency is one of the most common problems I discover when reviewing plans. When plans are inconsistent, the efforts of one team may undermine the efforts of another team. If the SDP says one thing and the SVP says another, the teams may make their own decisions on which to follow. There are several ways to ensure consistency among the plans, including using common authors and reviewers. One of the most effective ways I’ve found to create consistent plans is to gather the technical leads and spend some time collectively determining the software life cycle from beginning to end, including the activities, entry criteria, exit criteria, and output for each phase. A large white board or a computer projection to capture the common vision works great. Once the life cycle is documented and agreed upon, then the plans can be written.

Recommendation 6: Identify consistent transition criteria for processes between plans. Careful attention should be paid to defining consistent transition criteria between the processes in all of the plans, particularly the development and verification plans.

Recommendation 7: If developing multiple software products, company-wide planning templates may be helpful. The templates can provide a starting point for project-specific plans. It is best to create the templates using a set of plans that have been used on at least one successful project and that have implemented lessons learned. I recommend that the templates be regularly updated based on lessons learned and feedback from certification authorities, customers, designees, other teams, etc.

Recommendation 8: Involve certification experts (such as authorized designees*) as early in the planning process as possible. Getting input from a designee or someone with certification experience can save time and prevent unnecessary troubles later in the project.

Recommendation 9: Obtain management buy-in on the plans. Effective project execution is dependent on management’s understanding and support of the plans. There are many ways to encourage management buy-in; one of the most effective is to involve management in the planning process, as authors or reviewers. Once the plans are written, management will need to establish a detailed strategy to implement the plans (e.g., identify the necessary resources); the success of this strategy depends on their understanding of the plans.

Recommendation 10: Include an appropriate level of detail in the plans. Rather than including detailed procedures in the plans, the procedures can be located in an engineering manual or in work instructions. The plans should identify the procedures and provide a summary of the procedures, but the details do not have to be in the plans themselves. For example, structural coverage analysis and data and control coupling analyses normally require some specific procedures and instructions; these procedures can be packaged somewhere besides the SVP. This approach allows some flexibility to choose the best solution when it comes time to actually execute the SVP. It is important that the plans clearly explain what procedures apply, how the procedures will be configuration controlled, how modifications will occur, and how any modifications will be communicated to the team.

Recommendation 11: Brief the certification authority prior to submitting the PSAC. In addition to submitting the PSAC and possibly the other plans to the certification authority, it is valuable to provide a presentation to the certification authority on the planned approach. This is particularly recommended for new or novel products or for companies who may not have a strong working relationship with their local certification authority. Oftentimes, the verbal discussion will identify issues that need to be resolved prior to submitting the plans to the certification authority.

Recommendation 12: Disperse plans to the teams. Once the plans are mature, make sure the teams understand their content. I am amazed at how many projects I audit where the engineers don’t know where to find the plans or what the plans say. Since engineers are not always thrilled with documentation, I recommend that training sessions be mandatory for all team members, as well as reading assignments for their specific roles. Be sure to keep accurate training records, since some certification authorities may ask to see such evidence.

Recommendation 13: Develop the plans to be changeable. A forward-thinking project realizes that the plans may change as the project progresses; therefore, it is wise to document the process for changing the plans in the plans themselves. DO-178C section 4.2.e indicates that the PSAC should include an explanation of the means to revise software plans throughout the project.

Sometimes the planned processes do not work as expected. Improvements or a complete overhaul to the process may be needed. The plans need to be updated to reflect the modifications. During the planning process, there should be thought given to the process for updating plans in a timely manner. This will make it easier when it actually happens.

Sometimes it is tempting to simply document changes in the Software Accomplishment Summary (SAS)* rather than updating the plans. For changes that happen late in the process, this might be acceptable. However, there are some risks. First, the certification authority may not allow it, since many authorities prefer to always have the plans current. Second, the next team that uses the plans (either for updating the software or for a starting point for the next project) may not know about the changes. Therefore, if plans are not updated, the changes should be clearly documented and agreed upon with the certification authority. The changes should also be formally communicated to the engineering teams and documented in problem report(s) against the plans to ensure they are not overlooked.

Recommendation 14: Plan for postcertification process. In addition to changes during the initial certification effort, it is recommended that the planning structure accommodate the postcertification process. Questions to consider are the following: Will postcertification changes require a new PSAC? Or will the change be described in a change impact analysis that is an addendum to the PSAC? Or will changes be described in an appendix of the PSAC? Or will some other packaging approach be used?

If the postcertification process is not considered during the initial certification, it may take more time later and will likely result in an updated PSAC being required for the change, no matter how insignificant the change may seem. It’s worth noting that if the plans are kept up to date during the initial development effort, it makes postcertification changes much easier.

The remainder of this chapter explains each of the five plans and three standards mentioned in DO-178C and includes some recommendations to consider while developing them.

5.3 Five Software Plans

This section explains the expected contents of the five plans and provides suggestions to consider. Several technical topics that may be unfamiliar are introduced in this chapter and will be further explained in future chapters.

5.3.1 Plan for Software Aspects of Certification

The PSAC is the one plan that is always submitted to the certification authority. It is like a contract between the applicant and the certification authority; therefore, the earlier it is prepared and agreed upon, the better. A PSAC that is not submitted until late in the project introduces risk to the project. The risk is that a team may get to the end of a project only to find out that the processes and/or data are not compliant. This can cause schedule and budget slips, considerable rework, and increased scrutiny by the certification authority.

The PSAC provides a high-level description of the overall project and explains how the DO-178C (and applicable supplements) objectives will be satisfied. It also provides a summary of the other four plans. Since the PSAC is often the only plan submitted to the certification authority, it should stand alone. It doesn’t need to repeat everything in the development, verification, configuration management, and quality assurance plans, but it should provide an accurate and consistent summary of those plans. Occasionally, I come across a PSAC that merely points to other documents (I call it a pointer PSAC). Instead of summarizing the development, verification, configuration management, and software quality assurance (SQA) processes, it simply points to the other plans. This certainly reduces redundant text. However, it also means that the other plans will most likely need to be submitted to the certification authority in order for the authority to fully understand the processes. Ideally, the PSAC includes enough detail that the certification authority understands the processes and is able to make an accurate judgment about the compliance, but not so much detail that it repeats the contents of the other plans.

The PSAC should be written clearly and concisely. I have reviewed PSACs that repeated the same thing multiple times and that actually copied sections of DO-178B. The clearer the document is, the less confusing it will be and the more likely it will be approved by the certification authority.

DO-178C section 11.1 provides a summary of the expected PSAC contents and is often used as the outline for the PSAC. Following are the typical sections of the PSAC, along with a brief summary of the contents of each section [1]:

  • System overview: This section explains the overall system and how the software fits into the system. It is normally a couple of pages long.

  • Software overview: This section explains the intended functionality of the software, as well as any architectural concerns. It, too, consists of a couple of pages. Since the plan is for software, it is important to explain the software that will be developed, not just the system.

  • Certification considerations: This section explains the means of compliance. If the DO-178C supplements are being used, this is a good place to explain what supplement will apply to what part of the software. Additionally, this section usually summarizes the safety assessment to justify the assigned software levels. Even for level A software, it is important to detail what drives the decision and to explain if any additional architectural mitigation is needed. Many projects also explain their plans for supporting stage of involvement (SOI) audits in this section of the PSAC. SOI audits are explained in Chapter 12.

  • Software life cycle: This section describes the phases of the software development and the integral processes and is typically a summary of the other plans (SDP, SVP, SCMP, and SQAP).

  • Software life cycle data: This section lists the life cycle data to be developed during the project. Document numbers are normally assigned, although there may be an occasional TBD or XXX. Frequently, this section includes a table that lists the 22 data items from DO-178C section 11 and includes the document names and numbers. It is also the norm to include an indication if the data will be submitted to the certification authority or just available. Some applicants identify whether the data will be treated as control category #1 (CC1) or control category #2 (CC2) and if the data will be approved or recommended for approval by the designees. As an alternative, the CC1/CC2 information may be in the SCMP. However, if a company-wide SCMP is used, the PSAC may identify the project-specific configuration management details. CC1 and CC2 are discussed in Chapter 10 of this book.

  • Schedule: This section includes the schedule for the software development and approval. I have not yet seen one of these schedules actually met; therefore, the question often arises: “Why do we need the schedule if no one ever meets it?” The purpose is to help both the project and the certification authority plan their resources. Any changes to the schedule throughout the project should be coordinated with the approving authority (this does not require an update to the PSAC, but if the PSAC is updated for other purposes, the schedule should also be updated). Ordinarily, the schedule in the PSAC is relatively high-level and includes the major software milestones, such as when: plans will be released, requirements will be completed, design will be completed, code will be completed, test cases will be written and reviewed, tests will be executed, and SAS will be written and submitted. Some applicants also include SOI audit readiness dates in their schedule.

  • Additional considerations: This is one of the most important sections of the PSAC because it communicates any special issues of which the certification authority needs to be aware. In certification it is important to make every effort to avoid surprises. Documenting all additional considerations clearly and concisely is a way to minimize surprises and to obtain agreement with the certification authority. DO-178C includes a nonexhaustive list of additional consideration topics (such as previously developed software, commercial off-the-shelf [COTS] software, and tool qualification). If there is anything about your project that might be considered out of the ordinary, this is the place to disclose it. This might include the use of a partitioned real-time operating system (RTOS), an offshore team, or automation. Additionally, any planned deactivated code or option-selectable software should be explained. Although DO-178C doesn’t require it, I recommend that the additional considerations section of the PSAC include a list of all tools that will be used on the project, with a brief description of how the tool will be used and justification for why the tool does or does not need to be qualified. Disclosing such information during planning can help prevent some late discoveries; particularly if a tool should be qualified but has not been identified as one that requires qualification.

As noted in Recommendation #1 earlier, it is useful to include a listing of all the applicable DO-178C (and applicable supplements) objectives in an appendix of the PSAC, along with a brief explanation of how each objective will be satisfied and where in the plans each objective is addressed. The designees and certification authority appreciate this information because it provides evidence that the project thoroughly considered the DO-178C compliance details. As a side note, if you opt to include the objectives mapping in your PSAC, please ensure that it is accurate. Since designees and certification authorities like this information, they typically read it; having it accurate and complete helps to build their confidence.

5.3.2 Software Development Plan

The SDP explains the software development, including requirements, design, code, and integration phases (DO-178C Table A-2). Additionally, the SDP often briefly explains the verification of requirements, design, code, and integration (DO-178C Tables A-3 through A-5). The SDP is written for the developers who will write the requirements, design, and code and perform the integration activities. The SDP should be written so that it guides the developers to successful implementation. This means it needs to be detailed enough to provide them good direction, but not so detailed that it limits their ability to exercise engineering judgment. This is a delicate balance. As noted in Recommendation #10, there may be more detailed procedures or work instructions. If this is the case, the SDP should clearly explain what procedures apply and when they apply (i.e., the SDP points to the procedures rather than including the details in the plan). Occasionally, more flexibility may be needed for the detailed procedures; for example, if the procedures are still being developed after the plans are released. If this is the case, the SDP ought to explain the process for developing and controlling the procedures; however, care should be taken when using this approach, since it can be difficult to ensure that all engineers are following the right version of the procedures.

DO-178C section 11.2 identifies the preferred contents of the SDP. The SDP includes a description of three major items: (1) the standards used for development (occasionally, the standards are even included in the plan), (2) the software life cycle with an explanation of each of the phases and criteria for transitioning between phases, and (3) the development environment (the methods and tools for requirements, design, and code, as well as the intended compiler, linker, loader, and hardware platforms). Each is explained in the following:

  1. Standards: Each project should identify standards for requirements, design, and code. These standards provide rules and guidelines for the developers to help them write effective requirements, design, and code. The standards also identify constraints to help developers avoid pitfalls that could negatively impact safety or software functionality. The standards should be applicable to the methodology or language used. The SDP typically references the standards, but in some situations, the standards may be included in the SDP (this sometimes occurs for companies that do limited software development or that have small projects). The three development standards are discussed later in this chapter.

  2. Software life cycle: The SDP identifies the intended life cycle for the software development. This is typically based on a life cycle model.

    In addition to identifying the life cycle model by name, it is recommended that the model be explained, since not all life cycle models mean the same to everyone. A graphic of the life cycle and the data generated for each phase is helpful. Some of the life cycle models that have been successfully used on certification projects are waterfall, iterative waterfall, rapid prototyping, spiral, and reverse engineering. I highly recommend avoiding the big bang, tornado, and smoke-and-mirrors life cycle models.*

    Unfortunately, some projects identify one life cycle model in their plans but actually follow something else. For example, projects sometimes claim that they use waterfall because they believe that is what DO-178C requires and what the certification authority prefers. However, DO-178C does not require waterfall, and claiming waterfall without actually using it causes several challenges. It is important to identify the life cycle model that you actually plan to use, to ensure that it satisfies the DO-178C objectives, and to follow the documented life cycle model. If the documented life cycle model is not what is needed, the plans should be updated accordingly, unless otherwise agreed with the certification authority.

    As mentioned earlier, the SDP documents the transition criteria for the software development. This includes the entry criteria and exit criteria for each phase of the development. There are many ways to document transition criteria. A table can be an effective and straightforward way to document such information. The table lists each phase, the activities performed in that phase, the criteria for entering the phase, and the criteria for exiting the phase. Appendix A provides an example transition criteria table for a development effort. It is important to keep in mind that DO-178C doesn’t dictate an ordering for the development activities, but the verification efforts will need to be top-down (i.e., verify requirements prior to verifying design prior to verifying code).

  3. Software development environment: In addition to identifying standards and describing the software life cycle, the SDP identifies the development environment. This includes the tools used to develop requirements, design, source code, and executable object code (e.g., compiler, linker, editor, loader, and hardware platform). If a full list of tools was identified in the PSAC, as suggested, the SDP may simply reference the PSAC. However, the SDP may go into more detail about how the tools are used in the software development. Many times, the PSAC and SDP do not provide the tool part numbers, since that information is included in the Software Life Cycle Environment Configuration Index (SLECI). To avoid redundancy, some projects create the SLECI early in the project and reference it from the SDP. If this is the case, an initial version of the SLECI should be completed with the plans, even though the SLECI will likely need to be updated later as the project matures.

    The environment identification provides a means to control it and to ensure that the software can be consistently reproduced. Uncontrolled tools can lead to problems during the implementation, integration, and verification phases. I witnessed one situation where different developers used different compiler versions and settings. When they integrated their modules, things got interesting.

    If any development tools require qualification, this should be explained in the SDP. The PSAC may have already provided a summary of qualification, but the SDP should explain how the tool is used in the overall life cycle and how the tool users will know the proper tool operation (such as a reference to Tool Operational Requirements or User’s Guide). See Chapter 13 for information on tool qualification.

Chapters 6 through 8 provide more information on software requirements, design, and implementation, respectively.

5.3.3 Software Verification Plan

The primary audience for the SVP is the team members who will perform the verification activities, including testing. The SVP is closely related to the SDP, since the verification effort includes the evaluation of data that were generated during the development phases. As mentioned earlier, the SDP often provides a high-level summary of the requirements, design, code, and integration verification (such as peer reviews). The SVP normally provides additional details on the reviews (including review process details, checklists, required participants, etc.). It is acceptable to include the review details in the SDP and use the SVP to focus on testing and analyses. Regardless of the packaging, it must be clear what plan is covering what activities.

Of all the plans, the SVP tends to vary the most depending on the software level. This is because most of the DO-178C level differences are in the verification objectives. Typically, the SVP explains how the team will satisfy the objectives in DO-178C Tables A-3 through A-7.

The SVP explains the verification team organization and composition, as well as how the required DO-178C independence is satisfied. Although it is not required, most projects have a separate verification team to perform the test development and execution. DO-178C identifies several verification objectives that require independence (they have filled circles [⚫] in the DO-178C Annex A tables). DO-178C verification independence doesn’t require a separate organization, but it does require that one or more persons (or maybe a tool) who did not develop the data being verified perform the verification. Independence basically means that another set of eyes and brain (possibly accompanied by a tool) are used to examine the data for correctness, completeness, compliance to standards, etc. Chapter 9 explains more about verification independence.

DO-178C verification includes reviews, analyses, and tests. The SVP explains how reviews, analyses, and tests will be performed. Any checklists used to accompany the verification are also either included in the SVP or referenced from the SVP.

Many of the DO-178C objectives may be satisfied by review. Tables A-3, A-4, and most of A-5 tend to be met using a peer review process (which is discussed further in Chapter 6). Additionally, some of the Table A-7 objectives (such as objectives 1 and 2) are satisfied by review. The SVP explains the review process (including or referencing detailed review procedures), the transition criteria for reviews, and checklists and records used to record the reviews. Either the SVP or the standards normally include (or reference) checklists for reviewing the requirements, design, and code. The checklists are used by engineers to ensure they don’t overlook important criteria during the review. Checklists that are brief tend to be most effective; if they are too detailed, they are usually not fully utilized. To create a concise but comprehensive checklist, I recommend separating the checklist items and detailed guidance into separate columns. The checklist column is brief but the guidance column provides detailed information to ensure that the reviewers understand the intent of each checklist item. This approach is particularly useful for large teams, teams with new engineers, or teams using outsourced resources. It helps to set the bar for the reviews and ensure consistency. Checklists typically include items to ensure that required DO-178C objectives are evaluated (including traceability, accuracy, and consistency) and the standards are satisfied, but they are not limited to the DO-178C guidance.

DO-178C Table A-6 is typically satisfied by the development and execution of tests. The SVP explains the test approach; how normal and robustness tests will be developed; what environment will be used to execute the tests; how traceability will occur between requirements, verification cases, and verification procedures; how verification results will be maintained; how pass/fail criteria will be identified; and where test results will be documented. In many cases the SVP makes reference to a Software Verification Cases and Procedures document, which details the test plans, specific test cases and procedures, test equipment and setup, etc.

DO-178C Tables A-5 (objectives 6 and 7) and A-7 (objectives 3–8) are usually satisfied (at least partially) by performing analyses. Each planned analysis should be explained in the SVP. Typical analyses include traceability analyses (ensuring complete and accurate bidirectional traceability between system requirements, high-level software requirements, low-level software requirements, and test data), worst-case execution timing analysis, stack usage analysis, link analysis, load analysis, memory map analysis, structural coverage analysis, and requirements coverage analysis. These analyses are explained in Chapter 9. The SVP should identify the intended approach for each analysis and where the procedures and results will be documented.
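The core of a bidirectional traceability analysis can be sketched in a few lines: every requirement should trace down to at least one test case, and every test case should trace up to a requirement. The identifiers below are invented, and a real analysis would span the full chain from system requirements through low-level requirements to test data:

```python
# Hypothetical sketch of a bidirectional traceability check between
# high-level requirements and test cases. Identifiers are illustrative.

req_to_tests = {
    "HLR-001": ["TC-010", "TC-011"],
    "HLR-002": ["TC-012"],
    "HLR-003": [],                   # no downward trace -> coverage gap
}
all_tests = {"TC-010", "TC-011", "TC-012", "TC-099"}  # TC-099 traces to nothing

untraced_reqs = sorted(r for r, tcs in req_to_tests.items() if not tcs)
traced_tests = {tc for tcs in req_to_tests.values() for tc in tcs}
orphan_tests = sorted(all_tests - traced_tests)

print(untraced_reqs)  # -> ['HLR-003']  (requirements with no test)
print(orphan_tests)   # -> ['TC-099']   (tests with no requirement)
```

In practice this data lives in a requirements management tool rather than hand-built dictionaries, but the analysis it must support is the same: gaps in either direction indicate missing verification or untraced (possibly extraneous) test data.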

Since tools are often used for the verification effort, the SVP lists those tools. If the PSAC tool list is complete (as previously recommended), the SVP may simply reference the PSAC rather than repeating the list. However, the SVP usually provides more detail on how each tool will be used in the software verification and references instructions necessary to properly use the tools. As with the development tools, the SVP may reference the SLECI to identify the tool details (versions and part numbers), rather than including the information in the SVP itself. If this is the case, the SLECI should be completed with the plans and may require revision prior to beginning the formal verification process. The SLECI is explained in Chapter 10.

If an emulator or simulator will be used to verify the software, its use should be explained and justified in the SVP. In some cases, the emulator or simulator may need to be qualified. Chapter 9 discusses emulation and simulation.

The SVP also identifies the transition criteria for verification activities. Appendix A includes an example of the entry criteria, activities, and exit criteria for the verification activities of a project.

If the software being developed and verified contains partitioning, the SVP should explain how the partitioning integrity will be verified (DO-178C Table A-4 objective 13). Partitioning is discussed in Chapter 21.

DO-178C section 11.3 also mentions that the SVP should discuss assumptions made about the compiler, linker, and loader correctness. If compiler optimization is used, it should be explained in the plans, since it may affect the ability to obtain structural coverage analysis or to perform source-to-object code analysis. Also, the SVP should explain how the accuracy of the linker will be verified. If a loader is used without an integrity check (such as a cyclic redundancy check [CRC]), then the loader functionality will need to be verified. If an integrity check is used, the SVP should explain the approach and justify the adequacy of the check (e.g., mathematically calculate the algorithm accuracy to ensure that the CRC is adequate for the data being protected).

Finally, the SVP should explain how reverification will be performed. If changes are made during the development, will everything be retested or will a regression analysis be performed and only the impacted and changed items retested? The SVP should explain the planned approach, the criteria that will be used, and where the decisions will be documented. Reverification should consider both the changed and impacted software.

If previously developed software (e.g., COTS software or reused software) is used, some reverification may be needed (e.g., if it is installed in a new environment or used in a different way). The SVP should explain this. If no reverification is needed, the SVP should justify why not.

Chapter 9 provides more details on verification.

5.3.4 Software Configuration Management Plan

The SCMP explains how to manage the configuration of the life cycle data throughout the software development and verification effort. Software configuration management (SCM) begins during the planning phase and continues throughout the entire software life cycle—all the way through deployment, maintenance, and retirement of the software.

DO-178C section 11.4 provides a summary of what the SCMP should include. The SCMP explains the SCM procedures, tools, and methods for developmental SCM (used by engineering prior to formal baseline or releases) and formal SCM, as well as the transition criteria for the SCM process.

DO-178C section 7.2 explains the SCM activities, which need to be detailed in the SCMP. A brief summary of what typically goes in the SCMP for each activity is included in the following:

  1. Configuration identification: The SCMP explains how each configuration item (including individual source code and test files) is uniquely identified. Unique identification typically includes document or data numbering and revisions or versions.

  2. Baselines and traceability: The SCMP explains the approach for establishing and identifying baselines. If engineering baselines will be established throughout the project, this should be explained. Likewise, the establishment of formal baselines should be explained, including certification and production baselines. The traceability between baselines is also detailed in the SCMP.

  3. Problem reporting: The problem reporting process is part of the SCM process and should be explained in the SCMP, including an explanation of when the problem reporting process will begin, the required contents of a problem report (PR), and the process for verifying and closing a PR. The problem reporting process is crucial to an effective change management process and should be well defined in the plans. Engineers should also be trained on the PR process. Oftentimes, the SCMP includes a PR form with a brief explanation of how to complete each field. Most PRs include a classification field (to categorize the severity of the PR) and a state field (to identify the state of the PR, such as open, in-work, verified, closed, or deferred). Problem reporting is discussed in Chapters 9 and 10.

    If any other process besides the PR process is used to gather issues or actions, it should also be explained in the SCMP. This may happen when companies have additional change request, deviation, and/or action item tracking processes on top of the problem reporting system.

  4. Change control: The SCMP explains how changes to life cycle data are controlled to ensure that change drivers are established and approved prior to changing a data item. This is closely related to the problem reporting process.

  5. Change review: The purpose of the change review process is to make sure that changes are planned, approved, documented, properly implemented, and closed. It is typically monitored by a change review board that approves change implementation and ensures the change has been verified prior to closure. The change review process is closely related to the problem reporting process.

  6. Configuration status accounting: Throughout the project, it is necessary to know the status of the effort. Configuration status accounting provides this capability. Most SCM tools offer the ability to generate status reports. The SCMP explains what to include in status reports, when status reports will be generated, how status reports will be used, and how the tools (if applicable) support this process. The status and classification of problem reports throughout the development is particularly pertinent to certification. DO-178C section 7.2.6 provides information on what to consider in status accounting reports.

  7. Archival, release, and retrieval: Many companies have detailed procedures for release, archival, and retrieval. In this case, the company procedures may be referenced in the SCMP, but the developer should ensure that such company procedures adequately address the DO-178C guidance in sections 7.2.7 and 11.4.b.7 [1]. Following are the specific items to explain or reference in the SCMP:

    1. Archival: The SCMP explains how archiving is achieved. Typically, this includes an off-site archiving process, as well as a description of media type, storage, and refresh rate. The long-term reliability of the media and its readability should be considered.

    2. Release: The formal process for releasing data is explained in the SCMP. Projects also maintain data that are not released; the SCMP should indicate how such data will be stored.

    3. Retrieval: Additionally, the retrieval processes should be explained or referenced. The retrieval process considers long-term retrieval and media compatibility (e.g., the process may require that certain equipment be archived in order to retrieve data in the future).

  8. Software load control: The SCMP explains how software is accurately loaded into the target. If an integrity check is used, it should be detailed. If there is no integrity check, the approach for ensuring an accurate, complete, and uncorrupted load should be defined.

  9. Software life cycle environment control: The software life cycle environment identified in the SDP, SVP, and/or SLECI must be controlled. A controlled environment ensures that all team members are using the approved environment and ensures a repeatable process. The SCMP describes how the environment is controlled.

    Normally, this involves a released SLECI and an assessment to ensure that the tools listed in the SLECI are complete, accurate, and utilized. Oftentimes, a configuration audit (or conformity) is required prior to the formal software build and formal test execution steps. SQA may perform or witness the configuration audit.

  10. Software life cycle data control: The SCMP identifies all software life cycle data to be produced, along with CC1/CC2 categorization. This should also include how the specific project is implementing CC1 and CC2 for their data. DO-178C’s CC1/CC2 criteria define the minimum SCM required, but many projects exceed the minimum, combine data items, or vary some other way from DO-178C’s suggestions. If the CC1/CC2 is identified in the PSAC for each configuration item, then the PSAC may be referenced, rather than repeating it in the SCMP. Frequently, the SCMP lists the minimal CC1/CC2 assignment from DO-178C, and the PSAC gives the project-specific assignment. This allows the use of a general, company-wide SCMP. CC1/CC2 is explained in Chapter 10.

If suppliers (including subcontractors or offshore resources) are used, the SCMP also explains the supplier’s SCM process. Many times the supplier has a separate SCMP, which is referenced. Or, the supplier may follow the customer’s SCMP. If multiple SCMPs are used, they should be reviewed for consistency and compatibility.

The plan for overseeing suppliers’ problem reporting processes should also be included in the SCMP (or in another plan referenced from the SCMP). Federal Aviation Administration (FAA) Order 8110.49 section 14-3.a states: “In order to ensure that software problems are consistently reported and resolved, and that software development assurance is accomplished before certification, the applicant should discuss in their Software Configuration Management Plan, or other appropriate planning documents, how they will oversee their supplier’s and sub-tier supplier’s software problem reporting process” [2]. The Order goes on to provide the FAA’s expectations for the SCMP, including an explanation of how supplier problems will be “reported, assessed, resolved, implemented, re-verified (regression testing and analysis), closed, and controlled” [2]. The European Aviation Safety Agency (EASA) identifies similar expectations in their Certification Memorandum CM-SWCEH-002 [3].

Finally, the SCMP identifies SCM-specific data generated, including PRs, Software Configuration Index (SCI), SLECI, and SCM records. All of these SCM data items are discussed in Chapter 10.

After reading dozens of SCMPs, I have identified several common deficiencies, which are listed here for your awareness:

  • The problem reporting process is often not explained in enough detail for it to be properly implemented by the development and verification teams. Additionally, it may not be clear when the problem reporting process will start.

  • The process for controlling the environment is rarely defined. As a consequence, many projects do not have adequate environment control.

  • The approach for archiving is not described, including how the environment (tools used to develop, verify, configure, and manage the software) will be archived.

  • The developmental SCM process that is used by engineering on a daily basis is rarely detailed.

  • Supplier control is often not adequately explained in the SCMP, and the various SCM processes between companies are rarely reviewed for consistency.

  • The plan for establishing developmental baselines is not elaborated.

  • SCM tools are frequently not identified or controlled.

Most companies have a company-wide SCMP. When this is the case, the project-specific details still need to be addressed somewhere. This could be in a separate project-specific SCMP that supplements the company-wide SCMP or in the PSAC (or some other logical document). Whatever the case, it should be clear in the PSAC, as well as the SDP and SVP, so that the team members understand and follow the approved processes.

5.3.5 Software Quality Assurance Plan

The SQAP describes the software quality team’s plan for assuring that the software complies with the approved plans and standards, as well as the DO-178C objectives. The SQAP includes the organization of the SQA team within the overall company and emphasizes their independence.

The SQAP also explains the software quality engineer’s* role, which typically includes the following:

  • Reviewing the plans and standards

  • Participating in the peer review process to ensure that the peer review process is properly followed

  • Enforcing the transition criteria identified in the plans

  • Auditing the environment to ensure developers and verifiers are using the tools identified in the SLECI with the appropriate settings, including compiler, linker, test tools and equipment, etc.

  • Assessing compliance to plans

  • Witnessing software builds and tests

  • Signing/approving key documents

  • Closing PRs

  • Participating in the change control board

  • Performing the software conformity review

SQA is further discussed in Chapter 11. As software quality engineers carry out their responsibilities, they generate SQA records, which explain what they did and any discrepancies discovered. Oftentimes, a common form is used for multiple SQA activities and a blank form is included in the SQAP. The SQAP should explain the approach for SQA record keeping, including what records will be kept, where they will be stored, and how their configuration will be managed.

Many times, the SQAP identifies target percentages for quality’s involvement. This can include the percentage of peer review participation and test witnessing. If this is the case, it should be clear how the metrics will be collected.

The SQAP should explain the transition criteria for the SQA process (i.e., when SQA activities begin), as well as any key timing details of when SQA activities will be performed.

Many companies have implemented product quality assurance (PQA) in addition to SQA. The product quality engineer (PQE) oversees the day-to-day activities of the project and focuses on technical quality, as well as process compliance. If PQA is used to satisfy any of the DO-178C Table A-9 objectives, the SQAP should describe the PQA role and how it is coordinated with the SQA role.

If suppliers help with the software development or verification, the SQAP should explain how they will be monitored by SQA. If the supplier has its own SQA team and SQA plans, those plans should be evaluated for consistency with the higher-level SQAP.

Please note that if the SQAP is a company-wide plan, it should be ensured that it is consistent with the project-specific plans. Oftentimes, there are some project-specific needs not mentioned in the company-wide plan. If this is the case, a separate SQAP, the PSAC, or an SQAP addendum may be used to address the project-specific SQA needs.

5.4 Three Development Standards

In addition to the five plans, DO-178C identifies the need for three standards: requirements standard, design standard, and coding standard. Many companies new to DO-178C confuse these with industry-wide standards (e.g., Institute of Electrical and Electronic Engineers standards). Industry-wide standards may serve as input to the project-specific standards, but each project has specific needs that are not dealt with in the industry-wide standards. If multiple projects use the same methods and languages, company-wide standards may be generated and applied. However, the applicability of the company-wide standards must be evaluated for each project. The standards provide rules and constraints to help developers do their job properly and avoid activities that could have negative safety or functional impacts.

Many standards are ineffective for the following reasons:

  • The standards are just created to satisfy a check mark for DO-178C compliance and have little actual value to the project.

  • The standards are cut and pasted from industry-wide standards but do not meet the project-specific needs.

  • The standards are not read until the verification phases (the developers either don’t know about the standards or they ignore them).

This section is intended to provide some practical advice for developing effective standards.

Although it’s not required, most standards include both mandatory and advisory material. The mandatory items are generally called rules and the advisory items are called guidelines. In many cases, the applicability of the rules and guidelines varies depending on the software level (e.g., one item may be advisory for level C but mandatory for levels A and B). It should be noted that DO-178C doesn’t require standards for level D, although they are not prohibited.

Before jumping into the standards, I want to emphasize their importance. They serve as the instructions to the developers to implement safe and effective practices. If they are written clearly and include rationale and good examples, they can provide an excellent resource to the developers. Once the standards are developed, it is important to provide training to all developers. The training should cover the content of the standards, explain how to use the standards, and emphasize that reading and following the standards is mandatory.

5.4.1 Software Requirements Standards

The software requirements standards define methods, tools, rules, and constraints for the software requirements development [1]. They typically apply to the high-level requirements; however, some projects also apply the requirements standards to the low-level requirements. In general, the requirements standards are a guide for the team writing the requirements. For instance, they explain how to write effective and implementable requirements, use the requirements management tool, perform traceability, handle derived requirements, and create requirements that meet the DO-178C criteria. Requirements standards may also serve as a training tool for engineers in addition to providing success criteria for the requirements review.

Following is a list of items normally included in requirements standards:

  • The criteria from DO-178C Table A-3 (in order to proactively address the DO-178C expectations).

  • Definitions and examples of high-level requirements, low-level requirements, and derived requirements (for reference purposes).

  • Quality attributes of requirements (verifiable, unambiguous, consistent, etc.). Chapters 2 (Section 2.2.3) and 6 (Section 6.7.6.9) explain characteristics of good requirements.

  • Traceability approach and instructions.

  • Criteria for using the requirements management tool (including an explanation of each attribute and guidance on mandatory versus optional information).

  • Criteria to identify requirements (such as a numbering scheme or a prohibition on reusing numbers).

  • If tables are used to represent requirements, explanation of how to properly use and identify them (e.g., numbering each row or column).

  • If graphics are used to represent or supplement the requirements, description of how to use each type of graphic and any symbols. Additionally, it may be necessary to specify a way to identify and trace each block or symbol. For information on model-based development, see Chapter 14.

  • Criteria to distinguish requirements from explanatory material.

  • Criteria to document derived requirements (including rationale for the derived requirements to assist the safety personnel in their safety assessment).

  • Constraints or limitations on any tools being used.

  • Criteria to develop robust requirements.

  • Criteria to handle tolerances within the requirements.

  • Criteria to use interface control documents and to document requirements that reference them.

  • Examples of applied rules and guidelines.

Chapter 6 discusses software requirements. Several of the concepts discussed in Chapter 6 may be suitable to include in the requirements standards.

5.4.2 Software Design Standards

The software design standards define methods, tools, rules, and constraints for developing the software design [1]. In DO-178C, the design includes the low-level requirements and the software architecture.

The design standards are a guide for the team developing the design. The standards explain how to write effective and implementable design, use the design tools, perform traceability, handle derived low-level requirements, and create design data that meets the DO-178C criteria. Design standards may also serve as a training tool for engineers in addition to providing criteria for the design review.

Because design can be presented using different approaches, it is challenging to develop a generic design standard. Many companies have generic design standards, but they are often ineffective for the project-specific needs. Each project should determine their desired methodology and provide instructions to the designers to use the methodology properly. It may be possible to use the company-wide standards as a starting point, but often some kind of tailoring is needed. If the tailoring is minor, it may be feasible to discuss it in the SDP, rather than updating the standard.

Following is a list of items normally included in design standards:

  • The criteria from DO-178C Table A-4 (in order to proactively address DO-178C objectives).

  • Preferred layout for the design document.

  • Criteria for low-level requirements (low-level requirements will have the same quality attributes as high-level requirements; however, low-level requirements are design and will describe how, rather than what).

  • Traceability approach between high-level and low-level requirements.

  • Criteria to document derived low-level requirements and their rationale.

  • Guidelines for effective architecture, which may include block diagrams, structure charts, state transition diagrams, control and data flow diagrams, flowcharts, call trees, entity-relationship diagrams, etc.

  • Naming conventions for modules, which should be consistent with what will be implemented in code.

  • Design constraints (e.g., limited level of nested conditions, or prohibition of recursive functions, unconditional branches, reentrant interrupt service routines, and self-modifying instructions).

  • Guidelines for robust design.

  • Instructions for how to document deactivated code in the design (Chapter 17 provides information on deactivated code).

Chapter 7 provides recommendations for good software design. Several of the concepts discussed in Chapter 7 may be suitable to include in the design standards.

5.4.3 Software Coding Standards

Like the requirements and design standards, the coding standards are used to provide guidelines to the coders. Coding standards explain how to use the specific language properly, constrain constructs of the language that are not advisable in the safety-critical domain, identify naming conventions, explain the use of global data, and develop readable and maintainable code.

Coding standards are relatively commonplace in software development and there are some beneficial industry-wide resources to use when developing coding standards. The Motor Industry Software Reliability Association’s C standard (MISRA-C), for example, is an excellent standard to consider as input to C coding standards.

Coding standards are language-specific. If a project uses multiple languages, each language needs to be discussed in the standards. Even assembly language should have guidelines for usage.

Following is a list of items normally included in coding standards:

  • The criteria from DO-178C Table A-5 (to proactively address the objectives).

  • Approach to document traceability between the code and low-level requirements.

  • Module and function or procedure naming conventions.

  • Guidelines for using local and global data.

  • Guidelines for code readability and maintainability (e.g., use comments and white space, apply abstraction, limit file size, and limit depth of nested conditions).

  • Instructions for module structure (including header format and module sections).

  • Guidelines for function design (e.g., header format, function layout, unique function identification/name, required nonrecursive and nonreentrant functions, and entry and exit rules).

  • Constraints for conditionally compiled code.

  • Guidelines for the use of macros.

  • Other constraints (e.g., limit or prohibit use of pointers, prohibit reentrant and recursive code).

It is useful to include rationale and examples for each of the guidelines identified in the coding standards. When a coder understands why something is desired or prohibited, it is easier for him or her to apply the guidance.

Chapter 8 provides recommendations for writing safety-critical code. Several of the concepts discussed in Chapter 8 may be suitable to include in the coding standards.

5.5 Tool Qualification Planning

If a tool needs to be qualified, that requires some planning. The planning information may be included in the PSAC; however, for tools with higher tool qualification levels or for tools that will be reused on other projects, additional tool plans may be needed. Tool qualification is discussed in Chapter 13. For now, just be aware that tool qualification requires some special planning.

5.6 Other Plans

In addition to the five plans identified in DO-178C, projects may have some other plans that help with the project management. These may be less formal but are still important to the overall project management. Three such plans are discussed in this section.

5.6.1 Project Management Plan

This plan identifies team members and responsibilities (including names and specific assignments), detailed schedule, status of activities, metrics to gather, etc. It goes into more detail than the SDP and SVP on project organizational details and provides a way to ensure that everything is properly managed.

5.6.2 Requirements Management Plan

This plan is sometimes developed in addition to the SDP and software requirements standards to ensure proper documentation, allocation, and tracing of requirements. It may also help ensure consistent requirements documentation between the systems, hardware, safety, and software teams.

5.6.3 Test Plan

This plan explains the details of the test strategy, including equipment needed, procedures, tools, test case layout, tracing strategy, etc. It may be part of the Software Verification Cases and Procedures document or it may be a separate stand-alone plan. The test plan often includes the test-readiness review checklists and criteria that the team will use to ensure they are ready to begin formal test execution. Test planning is discussed in Chapter 9.

References

1. RTCA DO-178C, Software Considerations in Airborne Systems and Equipment Certification (Washington, DC: RTCA, Inc., December 2011).

2. Federal Aviation Administration, Software Approval Guidelines, Order 8110.49 (Change 1, September 2011).

3. European Aviation Safety Agency, Software Aspects of Certification, Certification Memorandum CM-SWCEH-002 (Issue 1, August 2011).

*Chapter 4 provides an overview of the technology supplements. Chapters 14 through 16 provide more details on each of the technology supplements.

The PSAC is the top-level software plan that is submitted to the certification authority. It is discussed later in this chapter.

*Designees are representatives of the certification authority or delegated organization who are authorized to review data and either approve or recommend approval. Projects typically coordinate with designees prior to interaction with the certification authority.

*The SAS is the compliance report for the DO-178C compliant software and is submitted to the certification authority. It is completed at the end of the software project and identifies deviations from the plans. The SAS is discussed in Chapter 12.

*Big bang is a hypothetical model where everything magically appears at the end of the project. Tornado and smoke-and-mirrors models are Leanna-isms. I use the tornado model to describe a project headed for disaster and the smoke-and-mirrors model to describe a project that really has no plan.

*Most software projects have one SQA engineer assigned. However, large or dispersed projects may have multiple quality engineers.
