Contents

Part I Introduction

1 Introduction and Overview

Acronyms

1.1 Defining Safety-Critical Software

1.2 Importance of Safety Focus

1.3 Book Purpose and Important Caveats

1.4 Book Overview

References

Part II Context of Safety-Critical Software Development

2 Software in the Context of the System

Acronyms

2.1 Overview of System Development

2.2 System Requirements

2.2.1 Importance of System Requirements

2.2.2 Types of System Requirements

2.2.3 Characteristics of Good Requirements

2.2.4 System Requirements Considerations

2.2.4.1 Integrity and Availability Considerations

2.2.4.2 Other System Requirements Considerations

2.2.5 Requirements Assumptions

2.2.6 Allocation to Items

2.3 System Requirements Validation and Verification

2.3.1 Requirements Validation

2.3.2 Implementation Verification

2.3.3 Validation and Verification Recommendations

2.4 Best Practices for Systems Engineers

2.5 Software’s Relationship to the System

References

3 Software in the Context of the System Safety Assessment

Acronyms

3.1 Overview of the Aircraft and System Safety Assessment Process

3.1.1 Safety Program Plan

3.1.2 Functional Hazard Assessment

3.1.3 System Functional Hazard Assessment

3.1.4 Preliminary Aircraft Safety Assessment

3.1.5 Preliminary System Safety Assessment

3.1.6 Common Cause Analysis

3.1.7 Aircraft and System Safety Assessments

3.2 Development Assurance

3.2.1 Development Assurance Levels

3.3 How Does Software Fit into the Safety Process?

3.3.1 Software’s Uniqueness

3.3.2 Software Development Assurance

3.3.3 Other Views

3.3.4 Some Suggestions for Addressing Software in the System Safety Process

References

Part III Developing Safety-Critical Software Using DO-178C

4 Overview of DO-178C and Supporting Documents

Acronyms

4.1 History of DO-178

4.2 DO-178C and DO-278A Core Documents

4.2.1 DO-278A and DO-178C Differences

4.2.2 Overview of the DO-178C Annex A Objectives Tables

4.3 DO-330: Software Tool Qualification Considerations

4.4 DO-178C Technology Supplements

4.4.1 DO-331: Model-Based Development Supplement

4.4.2 DO-332: Object-Oriented Technology Supplement

4.4.3 DO-333: Formal Methods Supplement

4.5 DO-248C: Supporting Material

References

5 Software Planning

Acronyms

5.1 Introduction

5.2 General Planning Recommendations

5.3 Five Software Plans

5.3.1 Plan for Software Aspects of Certification

5.3.2 Software Development Plan

5.3.3 Software Verification Plan

5.3.4 Software Configuration Management Plan

5.3.5 Software Quality Assurance Plan

5.4 Three Development Standards

5.4.1 Software Requirements Standards

5.4.2 Software Design Standards

5.4.3 Software Coding Standards

5.5 Tool Qualification Planning

5.6 Other Plans

5.6.1 Project Management Plan

5.6.2 Requirements Management Plan

5.6.3 Test Plan

References

6 Software Requirements

Acronyms

6.1 Introduction

6.2 Defining Requirement

6.3 Importance of Good Software Requirements

6.3.1 Reason 1: Requirements Are the Foundation for the Software Development

6.3.2 Reason 2: Good Requirements Save Time and Money

6.3.3 Reason 3: Good Requirements Are Essential to Safety

6.3.4 Reason 4: Good Requirements Are Necessary to Meet the Customer Needs

6.3.5 Reason 5: Good Requirements Are Important for Testing

6.4 The Software Requirements Engineer

6.5 Overview of Software Requirements Development

6.6 Gathering and Analyzing Input to the Software Requirements

6.6.1 Requirements Gathering Activities

6.6.2 Requirements Analyzing Activities

6.7 Writing the Software Requirements

6.7.1 Task 1: Determine the Methodology

6.7.2 Task 2: Determine the Software Requirements Document Layout

6.7.3 Task 3: Divide Software Functionality into Subsystems and/or Features

6.7.4 Task 4: Determine Requirements Priorities

6.7.5 A Brief Detour (Not a Task): Slippery Slopes to Avoid

6.7.5.1 Slippery Slope #1: Going to Design Too Quickly

6.7.5.2 Slippery Slope #2: One Level of Requirements

6.7.5.3 Slippery Slope #3: Going Straight to Code

6.7.6 Task 5: Document the Requirements

6.7.6.1 Document Functional Requirements

6.7.6.2 Document Nonfunctional Requirements

6.7.6.3 Document Interfaces

6.7.6.4 Uniquely Identify Each Requirement

6.7.6.5 Document Rationale

6.7.6.6 Trace Requirements to Their Source

6.7.6.7 Identify Uncertainties and Assumptions

6.7.6.8 Start a Data Dictionary

6.7.6.9 Implement Characteristics of Good Requirements

6.7.7 Task 6: Provide Feedback on the System Requirements

6.8 Verifying (Reviewing) Requirements

6.8.1 Peer Review Recommended Practices

6.9 Managing Requirements

6.9.1 Basics of Requirements Management

6.9.2 Requirements Management Tools

6.10 Requirements Prototyping

6.11 Traceability

6.11.1 Importance and Benefits of Traceability

6.11.2 Bidirectional Traceability

6.11.3 DO-178C and Traceability

6.11.4 Traceability Challenges

References

Recommended Readings

7 Software Design

Acronyms

7.1 Overview of Software Design

7.1.1 Software Architecture

7.1.2 Software Low-Level Requirements

7.1.3 Design Packaging

7.2 Approaches to Design

7.2.1 Structure-Based Design (Traditional)

7.2.2 Object-Oriented Design

7.3 Characteristics of Good Design

7.4 Design Verification

References

8 Software Implementation: Coding and Integration

Acronyms

8.1 Introduction

8.2 Coding

8.2.1 Overview of DO-178C Coding Guidance

8.2.2 Languages Used in Safety-Critical Software

8.2.2.1 Assembly Language

8.2.2.2 Ada

8.2.2.3 C

8.2.3 Choosing a Language and Compiler

8.2.4 General Recommendations for Programming

8.2.5 Special Code-Related Topics

8.2.5.1 Coding Standards

8.2.5.2 Compiler-Supplied Libraries

8.2.5.3 Autocode Generators

8.3 Verifying the Source Code

8.4 Development Integration

8.4.1 Build Process

8.4.2 Load Process

8.5 Verifying the Development Integration

References

Recommended Reading

9 Software Verification

Acronyms

9.1 Introduction

9.2 Importance of Verification

9.3 Independence and Verification

9.4 Reviews

9.4.1 Software Planning Review

9.4.2 Software Requirements, Design, and Code Reviews

9.4.3 Test Data Reviews

9.4.4 Review of Other Data Items

9.5 Analyses

9.5.1 Worst-Case Execution Time Analysis

9.5.2 Memory Margin Analysis

9.5.3 Link and Memory Map Analysis

9.5.4 Load Analysis

9.5.5 Interrupt Analysis

9.5.6 Math Analysis

9.5.7 Errors and Warnings Analysis

9.5.8 Partitioning Analysis

9.6 Software Testing

9.6.1 Purpose of Software Testing

9.6.2 Overview of DO-178C’s Software Testing Guidance

9.6.2.1 Requirements-Based Test Methods

9.6.2.2 Normal and Robustness Tests

9.6.3 Survey of Testing Strategies

9.6.3.1 Equivalence Class Partitioning

9.6.3.2 Boundary Value Testing

9.6.3.3 State Transition Testing

9.6.3.4 Decision Table Testing

9.6.3.5 Integration Testing

9.6.3.6 Performance Testing

9.6.3.7 Other Strategies

9.6.3.8 Complexity Measurements

9.6.3.9 Summary and Characteristics of a Good Test

9.6.4 Test Planning

9.6.5 Test Development

9.6.5.1 Test Cases

9.6.5.2 Test Procedures

9.6.5.3 DO-178C Requirements

9.6.5.4 Low-Level Requirements Testing versus Unit Testing

9.6.5.5 Handling Requirements That Cannot Be Tested

9.6.5.6 Obtaining Credit for Multiple Levels of Testing

9.6.5.7 Testing Additional Levels of Requirements

9.6.6 Test Execution

9.6.6.1 Performing Dry Runs

9.6.6.2 Reviewing Test Cases and Procedures

9.6.6.3 Using Target Computer versus Emulator or Simulator

9.6.6.4 Documenting the Verification Environment

9.6.6.5 Test Readiness Review

9.6.6.6 Running Tests for Certification Credit

9.6.7 Test Reporting

9.6.8 Test Traceability

9.6.9 Regression Testing

9.6.10 Testability

9.6.11 Automation in the Verification Processes

9.7 Verification of Verification

9.7.1 Review of Test Procedures

9.7.2 Review of Test Results

9.7.3 Requirements Coverage Analysis

9.7.4 Structural Coverage Analysis

9.7.4.1 Statement Coverage (DO-178C Table A-7 Objective 7)

9.7.4.2 Decision Coverage (DO-178C Table A-7 Objective 6)

9.7.4.3 Modified Condition/Decision Coverage (DO-178C Table A-7 Objective 5)

9.7.4.4 Additional Code Verification (DO-178C Table A-7 Objective 9)

9.7.4.5 Data Coupling and Control Coupling Analyses (DO-178C Table A-7 Objective 8)

9.7.4.6 Addressing Structural Coverage Gaps

9.7.4.7 Final Thoughts on Structural Coverage Analysis

9.8 Problem Reporting

9.9 Recommendations for the Verification Processes

References

Recommended Readings

10 Software Configuration Management

Acronyms

10.1 Introduction

10.1.1 What Is Software Configuration Management?

10.1.2 Why Is Software Configuration Management Needed?

10.1.3 Who Is Responsible for Implementing Software Configuration Management?

10.1.4 What Does Software Configuration Management Involve?

10.2 SCM Activities

10.2.1 Configuration Identification

10.2.2 Baselines

10.2.3 Traceability

10.2.4 Problem Reporting

10.2.4.1 Problem Report Management with Multiple Stakeholders

10.2.4.2 Managing Open/Deferred Problem Reports

10.2.5 Change Control and Review

10.2.6 Configuration Status Accounting

10.2.7 Release

10.2.8 Archival and Retrieval

10.2.9 Data Control Categories

10.2.10 Load Control

10.2.11 Software Life Cycle Environment Control

10.3 Special SCM Skills

10.4 SCM Data

10.4.1 SCM Plan

10.4.2 Problem Reports

10.4.3 Software Life Cycle Environment Configuration Index

10.4.4 Software Configuration Index

10.4.5 SCM Records

10.5 SCM Pitfalls

10.6 Change Impact Analysis

References

11 Software Quality Assurance

Acronyms

11.1 Introduction: Software Quality and Software Quality Assurance (SQA)

11.1.1 Defining Software Quality

11.1.2 Characteristics of High-Quality Software

11.1.3 Software Quality Assurance

11.1.4 Examples of Common Quality Process and Product Issues

11.2 Characteristics of Effective and Ineffective SQA

11.2.1 Effective SQA

11.2.2 Ineffective SQA

11.3 SQA Activities

References

12 Certification Liaison

Acronyms

12.1 What Is Certification Liaison?

12.2 Communicating with the Certification Authorities

12.2.1 Best Practices for Coordinating with Certification Authorities

12.3 Software Accomplishment Summary

12.4 Stage of Involvement (SOI) Audits

12.4.1 Overview of SOI Audits

12.4.2 Overview of the Software Job Aid

12.4.3 Using the Software Job Aid

12.4.4 General Recommendations for the Auditor

12.4.5 General Recommendations for the Auditee (the Applicant/Developer)

12.4.6 SOI Review Specifics

12.4.6.1 SOI 1 Entry Criteria, Expectations, and Preparation Recommendations

12.4.6.2 SOI 2 Entry Criteria, Expectations, and Preparation Recommendations

12.4.6.3 SOI 3 Entry Criteria, Expectations, and Preparation Recommendations

12.4.6.4 SOI 4 Entry Criteria, Expectations, and Preparation Recommendations

12.5 Software Maturity Prior to Certification Flight Tests

References

Part IV Tool Qualification and DO-178C Supplements

13 DO-330 and Software Tool Qualification

Acronyms

13.1 Introduction

13.2 Determining Tool Qualification Need and Level (DO-178C Section 12.2)

13.3 Qualifying a Tool (DO-330 Overview)

13.3.1 Need for DO-330

13.3.2 DO-330 Tool Qualification Process

13.4 Special Tool Qualification Topics

13.4.1 FAA Order 8110.49

13.4.2 Tool Determinism

13.4.3 Additional Tool Qualification Considerations

13.4.4 Tool Qualification Pitfalls

13.4.5 DO-330 and DO-178C Supplements

13.4.6 Using DO-330 for Other Domains

References

14 DO-331 and Model-Based Development and Verification

Acronyms

14.1 Introduction

14.2 Potential Benefits of Model-Based Development and Verification

14.3 Potential Risks of Model-Based Development and Verification

14.4 Overview of DO-331

14.5 Certification Authorities Recognition of DO-331

References

15 DO-332 and Object-Oriented Technology and Related Techniques

Acronyms

15.1 Introduction to Object-Oriented Technology

15.2 Use of OOT in Aviation

15.3 OOT in Aviation Handbook

15.4 FAA-Sponsored Research on OOT and Structural Coverage

15.5 DO-332 Overview

15.5.1 Planning

15.5.2 Development

15.5.3 Verification

15.5.4 Vulnerabilities

15.5.5 Type Safety

15.5.6 Related Techniques

15.5.7 Frequently Asked Questions

15.6 OOT Recommendations

15.7 Conclusion

References

Recommended Readings

16 DO-333 and Formal Methods

Acronyms

16.1 Introduction to Formal Methods

16.2 What Are Formal Methods?

16.3 Potential Benefits of Formal Methods

16.4 Challenges of Formal Methods

16.5 DO-333 Overview

16.5.1 Purpose of DO-333

16.5.2 DO-333 and DO-178C Compared

16.5.2.1 Planning and Development

16.5.2.2 Configuration Management, Quality Assurance, and Certification Liaison

16.5.2.3 Verification

16.6 Other Resources

References

Part V Special Topics

17 Noncovered Code (Dead, Extraneous, and Deactivated Code)

Acronyms

17.1 Introduction

17.2 Extraneous and Dead Code

17.2.1 Avoiding Late Discoveries of Extraneous and Dead Code

17.2.2 Evaluating Extraneous or Dead Code

17.3 Deactivated Code

17.3.1 Planning

17.3.2 Development

17.3.3 Verification

References

18 Field-Loadable Software

Acronyms

18.1 Introduction

18.2 What Is Field-Loadable Software?

18.3 Benefits of Field-Loadable Software

18.4 Challenges of Field-Loadable Software

18.5 Developing and Loading Field-Loadable Software

18.5.1 Developing the System to Be Field-Loadable

18.5.2 Developing the Field-Loadable Software

18.5.3 Loading the Field-Loadable Software

18.5.4 Modifying the Field-Loadable Software

18.6 Summary

References

19 User-Modifiable Software

Acronyms

19.1 Introduction

19.2 What Is User-Modifiable Software?

19.3 Examples of UMS

19.4 Designing the System for UMS

19.5 Modifying and Maintaining UMS

References

20 Real-Time Operating Systems

Acronyms

20.1 Introduction

20.2 What Is an RTOS?

20.3 Why Use an RTOS?

20.4 RTOS Kernel and Its Supporting Software

20.4.1 RTOS Kernel

20.4.2 Application Program Interface

20.4.3 Board Support Package

20.4.4 Device Driver

20.4.5 Support Libraries

20.5 Characteristics of an RTOS Used in Safety-Critical Systems

20.5.1 Deterministic

20.5.2 Reliable Performance

20.5.3 Compatible with the Hardware

20.5.4 Compatible with the Environment

20.5.5 Fault Tolerant

20.5.6 Health Monitoring

20.5.7 Certifiable

20.5.8 Maintainable

20.5.9 Reusable

20.6 Features of an RTOS Used in Safety-Critical Systems

20.6.1 Multitasking

20.6.2 Guaranteed and Deterministic Schedulability

20.6.2.1 Scheduling between Partitions

20.6.2.2 Scheduling within Partitions

20.6.3 Deterministic Intertask Communication

20.6.4 Reliable Memory Management

20.6.5 Interrupt Processing

20.6.6 Hook Functions

20.6.7 Robustness Checking

20.6.8 File System

20.6.9 Robust Partitioning

20.7 RTOS Issues to Consider

20.7.1 Technical Issues to Consider

20.7.1.1 Resource Contention

20.7.1.2 Priority Inversion

20.7.1.3 Memory Leaks

20.7.1.4 Memory Fragmentation

20.7.1.5 Intertask Interference

20.7.1.6 Jitter

20.7.1.7 Vulnerabilities

20.7.2 Certification Issues to Consider

20.7.2.1 Creating a Safe Subset

20.7.2.2 User’s Manual

20.7.2.3 Reverse Engineering

20.7.2.4 Deactivated Features

20.7.2.5 Complexity

20.7.2.6 Disconnect with the System

20.7.2.7 Code Compliance Issues

20.7.2.8 Error Handling Issues

20.7.2.9 Problem Reporting

20.7.2.10 Partitioning Analysis

20.7.2.11 Other Supporting Software

20.7.2.12 Target Testing

20.7.2.13 Modifications

20.8 Other RTOS-Related Topics

20.8.1 ARINC 653 Overview

20.8.2 Tool Support

20.8.3 Open Source RTOSs

20.8.4 Multicore Processors, Virtualization, and Hypervisors

20.8.5 Security

20.8.6 RTOS Selection Questions

References

21 Software Partitioning

Acronyms

21.1 Introduction to Partitioning

21.1.1 Partitioning: A Subset of Protection

21.1.2 DO-178C and Partitioning

21.1.3 Robust Partitioning

21.2 Shared Memory (Spatial Partitioning)

21.3 Shared Central Processing Unit (Temporal Partitioning)

21.4 Shared Input/Output

21.5 Some Partitioning-Related Challenges

21.5.1 Direct Memory Access

21.5.2 Cache Memory

21.5.3 Interrupts

21.5.4 Interpartition Communication

21.6 Recommendations for Partitioning

References

22 Configuration Data

Acronyms

22.1 Introduction

22.2 Terminology and Examples

22.3 Summary of DO-178C Guidance on Parameter Data

22.4 Recommendations

References

23 Aeronautical Data

Acronyms

23.1 Introduction

23.2 DO-200A: Standards for Processing Aeronautical Data

23.3 FAA Advisory Circular 20-153A

23.4 Tools Used for Processing Aeronautical Data

23.5 Other Industry Documents Related to Aeronautical Data

23.5.1 DO-201A: Standards for Aeronautical Information

23.5.2 DO-236B: Minimum Aviation System Performance Standards: Required Navigation Performance for Area Navigation

23.5.3 DO-272C: User Requirements for Aerodrome Mapping Information

23.5.4 DO-276A: User Requirements for Terrain and Obstacle Data

23.5.5 DO-291B: Interchange Standards for Terrain, Obstacle, and Aerodrome Mapping Data

23.5.6 ARINC 424: Standard, Navigation System Database

23.5.7 ARINC 816-1: Embedded Interchange Format for Airport Mapping Database

References

24 Software Reuse

Acronyms

24.1 Introduction

24.2 Designing Reusable Components

24.3 Reusing Previously Developed Software

24.3.1 Evaluating PDS for Use in Civil Aviation Products

24.3.2 Reusing PDS That Was Not Developed Using DO-178[ ]

24.3.3 Additional Thoughts on COTS Software

24.4 Product Service History

24.4.1 Definition of Product Service History

24.4.2 Difficulties in Seeking Credit Using Product Service History

24.4.3 Factors to Consider When Claiming Credit Using Product Service History

References

25 Reverse Engineering

Acronyms

25.1 What Is Reverse Engineering?

25.2 Examples of Reverse Engineering

25.3 Issues to Be Addressed When Reverse Engineering

25.4 Recommendations for Reverse Engineering

References

26 Outsourcing and Offshoring Software Life Cycle Activities

Acronyms

26.1 Introduction

26.2 Reasons for Outsourcing

26.3 Challenges and Risks in Outsourcing

26.4 Recommendations to Overcome the Challenges and Risks

26.5 Summary

Appendix A: Example Transition Criteria

Appendix B: Real-Time Operating System Areas of Concern

Appendix C: Questions to Consider When Selecting a Real-Time Operating System for a Safety-Critical System

Appendix D: Software Service History Questions

Index
