8 Software Implementation: Coding and Integration

Acronym

ACG autocode generator
ANSI American National Standards Institute
ARM Ada Reference Manual
CAST Certification Authorities Software Team
CPU central processing unit
CRC cyclic redundancy check
DoD Department of Defense
FAQ frequently asked question
I/O input/output
IEC International Electrotechnical Commission
ISO International Organization for Standardization
LRM Language Reference Manual
MISRA Motor Industry Software Reliability Association
PPP Pseudocode Programming Process
PSAC Plan for Software Aspects of Certification
RTOS real-time operating system
SCI Software Configuration Index
SLECI Software Life Cycle Environment Configuration Index
WORA write once, run anywhere

8.1 Introduction

This chapter examines two aspects of software implementation: (1) coding and (2) integration. Additionally, the verification of the coding and integration processes is discussed.

Coding is the process of developing the source code from the design description. The terms construction or programming are often preferred because they carry the connotation that coding is not just a mechanical step, but is one that requires forethought and skill—like constructing a building or bridge. However, DO-178C uses the term coding throughout. DO-178C follows the philosophy that most of the construction activity is part of design, rather than coding. DO-178C does not, however, prohibit the designer from also writing the code. Regardless of who determines the code construction details, the importance of source code development should not be underestimated. Requirements and design are critical, but the compiled and linked code is what actually flies.

Integration is the process of building the executable object code (using a compiler and linker) and loading it into the target computer (using a loader). Even though the integration process is sometimes considered trivial, it is a vitally important process in the development of safety-critical software.

8.2 Coding

Since the coding process generates the source code that will be converted to the executable image that actually ends up in the safety-critical system, it is an exceptionally important step in the software development process. This section covers the DO-178C guidance for coding, common languages used to develop safety-critical software, and recommendations for programming in the safety-critical domain. The C, Ada, and assembly languages are briefly explored, because they are the most commonly used languages in embedded safety-critical systems. Some other languages are briefly mentioned but are not detailed. This section ends by discussing a couple of code-related special topics: libraries and autocode generators (ACGs).

Please note that the subject of parameter or configuration data is closely related to coding; however, it is discussed in Chapter 22, rather than in this chapter.

8.2.1 Overview of DO-178C Coding Guidance

DO-178C provides guidance on the planning, development, and verification of source code.

First, during the planning phase the developer identifies the specific programming language, coding standards, and compiler (per DO-178C section 4.4.2). Chapter 5 discusses the overall planning process and the coding standards. This chapter provides some coding recommendations that should be considered in the company-specific coding standards during the planning phase.

Second, DO-178C provides guidance on the code development (DO-178C section 5.3). DO-178C explains that the primary objective of the coding process is developing source code that is “traceable, verifiable, consistent, and correctly implements low-level requirements” [1]. To accomplish this objective, the source code must implement the design description and only the design description (including the low-level requirements and the architecture), trace to the low-level requirements, and conform to the identified coding standards. The outputs of the coding phase are source code and trace data. The build instructions (including compile and link data) and load instructions are also developed during the coding phase but are treated as part of integration, which is discussed in Section 8.4.

Third, the source code is verified to ensure its accuracy, consistency, compliance to design, compliance to standards, etc. DO-178C Table A-5 summarizes the objectives for verifying the source code, which are discussed in Section 8.3.

8.2.2 Languages Used in Safety-Critical Software

At this time, there are three primary languages used in airborne safety-critical systems: C, Ada, and assembly. Some legacy products use other languages (including FORTRAN and Pascal). C++ has been used on some projects but is typically so severely restricted that it is effectively C. Java and C# have been used in several tool development efforts and work quite well there; however, to date they are not ready for implementing safety-critical systems. Other languages have been used or proposed. Because C, Ada, and assembly are the most predominant, they are briefly described in the following sections; Table 8.1 summarizes several other languages. It was tempting to provide a more detailed discussion of the languages; however, other resources describe them well.

8.2.2.1 Assembly Language

Assembly language is a low-level programming language used for computers, microprocessors, and microcontrollers. It uses a symbolic representation of the machine code to program a given central processing unit (CPU). The language is typically defined by the CPU manufacturer. Therefore, unlike high-level languages, assembly is not portable across platforms; however, it is often portable within a processor family. A utility program known as an assembler is used to translate the assembly statements into the machine code for the target computer. The assembler performs a one-to-one mapping from the assembly instructions and data to the machine instructions and data.

There are two types of assemblers: one pass and two pass. A one-pass assembler goes through the source code once and assumes that all symbols will be defined before any instruction that references them. The one-pass assembler has speed advantages. Two-pass assemblers create a table with all symbols and their values in the first pass, and then they use the table in a second pass to generate code. The assembler must at least be able to determine the length of each instruction on the first pass so that the addresses of symbols can be calculated. The advantage of the two-pass assembler is that symbols can be defined anywhere in the program source code. This allows the programs to be defined in more logical and meaningful ways and makes two-pass assembler programs easier to read and maintain [2].

Table 8.1 Languages Occasionally Used in Aviation Software

Language Description
C++: Introduced in 1979 by Bjarne Stroustrup at Bell Labs as an enhancement to C. C++ was originally named C with Classes and was renamed C++ in 1983. It has both high-level and low-level language features. C++ is a statically typed, free-form, multiparadigm, general-purpose, high-level programming language. It is used for applications, device drivers, embedded software, high-performance server and client applications, and hardware design. C++ adds the following object-oriented enhancements to C: classes, virtual functions, multiple inheritance, operator overloading, exception handling, and templates. Most C++ compilers will also compile C. C++ (and the Embedded C++ subset) has been used in several safety-critical systems, but many of the object-oriented features were not used.
C#: Introduced in 2001 by a team led by Anders Hejlsberg at Microsoft. C# was initially designed to be a general-purpose, high-level programming language that was simple, modern, and object-oriented. Characteristics of C# include strongly typed, imperative, functional, declarative, generic, object-oriented, and component-oriented. C# has been used for software tools used in aviation but is not yet considered mature enough for safety-critical software.
FORTRAN: Introduced in 1957 by John Backus at IBM. FORTRAN is a general-purpose, high-level programming language designed for numeric computation and scientific computing. High-performance computing is one of its desired features. Other characteristics include procedural and imperative programming, with later versions adding array programming, modular programming, object-oriented programming, and generic programming. It was used in avionics in the past and still exists on a few legacy systems.
Java: Introduced in 1995 by James Gosling at Sun Microsystems. Java was based on C and C++. It is a general-purpose, high-level programming language designed with very few implementation dependencies. Other characteristics include class-based, concurrent, and object-oriented. Java uses the philosophy “write once, run anywhere” (WORA), meaning code running on one platform does not need recompilation to run on a different platform. Java applications are typically compiled into bytecode, which is stored in a class file. This bytecode can then be executed on any Java virtual machine regardless of the computer architecture. A real-time Java has been developed, and a safety-critical subset is in work. Java has been used for some aviation software tools but is still not considered mature enough for safety-critical software.
Pascal: Introduced in 1970 by Niklaus Wirth and based on the ALGOL programming language. Pascal is an imperative and procedural high-level programming language. It was designed to promote good programming practices using structured programming and data structuring. Additional characteristics of Pascal include enumerations, subranges, records, dynamically allocated variables with associated pointers, and sets, which allow programmer-defined complex data structures such as lists, trees, and graphs. Pascal strongly types all objects, allows nested procedure definitions to any depth, and allows most definitions and declarations inside functions and procedures. It was used in avionics in the past and still exists on a few legacy systems.

Source: Wikipedia, Programming languages, http://en.wikipedia.org/wiki/List_of_programming_languages, accessed April 2012.

In general, assembly should be avoided when possible. It is difficult to maintain, has extremely weak data typing, has limited or no flow control mechanisms, is difficult to read, and is generally not portable. However, there are occasions when it is needed, including interrupt handling, hardware testing and error detection, interface to processor and peripheral devices, and performance support (e.g., execution speed in critical areas) [3].

8.2.2.2 Ada

Ada was first introduced in 1983 as what is known as Ada-83. It was influenced by the ALGOL and Pascal languages and was named after Ada Lovelace, who is believed to be the first computer programmer. Ada was originally developed at the request of the U.S. Department of Defense (DoD) and was mandated by the DoD for several years. Prior to Ada’s arrival, there were literally hundreds of languages used on DoD projects. The DoD desired to standardize on a language to support embedded, real-time, and mission-critical applications. Ada includes the following features: strong typing, packages (to provide modularity), run-time checking, tasks (to allow parallel processing), exception handling, and generics. Ada 95 and Ada 2005 add object-oriented programming capability. The Ada language has been standardized through the International Organization for Standardization (ISO), American National Standards Institute (ANSI), and International Electrotechnical Commission (IEC) standardization efforts. Unlike most ISO/IEC standards, the Ada language definition* is publicly available for free to be used by programmers and compiler manufacturers.

Ada is generally favored by those most committed to safety, because it has such a strong feature set available. It “supports run-time checks to protect against access to unallocated memory, buffer overflow errors, off-by-one errors, array access errors, and other detectable bugs” [4]. Ada also supports many compile-time checks for defects that other languages cannot detect until run-time, or that would require explicit checks to be added to the source code. In 1997 the DoD effectively removed its mandate to use Ada. By that time, the overall number of languages used had declined, and those that remained were mature enough to produce quality products (i.e., a few mature languages were available rather than hundreds of immature ones).

8.2.2.3 C

C is one of the most popular programming languages in history. It was originally developed between 1969 and 1973 by Dennis Ritchie at Bell Telephone Laboratories for use with the UNIX operating system. By 1973, the language was powerful enough that most of the UNIX operating system kernel was rewritten in C. This made UNIX one of the first operating systems implemented in a nonassembly language [5]. C provides low-level access to memory and has constructs that map well to machine instructions. Therefore, it requires minimal run-time support and is useful for applications that were previously coded in assembly.

Some are hesitant to call C a high-level language; instead they prefer to call it a mid-level language. It does have high-level language features such as structured data, structured control flow, machine independence, and operators [6]. However, it also has low-level constructs (e.g., bit manipulation). C uses functions to contain all executable code. C has weak typing, may hide variables in nested blocks, can access computer memory using pointers, has a relatively small set of reserved keywords, uses library routines for complex functionality (e.g., input/output [I/O], math functions, and string manipulation), and uses several compound operators (such as +=, -=, *=, and ++). Text for a C program is free-format and uses the semicolon to terminate a statement.

C is a powerful language. With that powerful capability comes the need for extreme caution when using it. The Motor Industry Software Reliability Association’s C standard (MISRA-C) [7] provides excellent guidelines to safely implement C and is frequently used as input to company-specific coding standards.

8.2.3 Choosing a Language and Compiler

When selecting a language and compiler to be used for a safety-critical project or projects, several aspects should be considered.

Consideration 1: Capabilities of the language and compiler. The language and compiler must be capable of doing their job. I once worked with a company that developed a compiler to implement a safe subset of Ada. They even went through the effort and expense to qualify it as a level A development tool. However, the Ada subset was not extensive enough to support projects in the real world; therefore, it was not utilized by the industry. Some of the basic language capabilities that are important for most projects are as follows:

  • Readability of the code—consider such things as case sensitivity and mixing, understandability of reserved words and math symbols, flexible format rules, and clear naming conventions.

  • Ability to detect errors at compile-time, such as typos and common coding errors.

  • Ability to detect errors at run-time, including memory exhaustion checks, exception handling constructs, and math error handling (e.g., number overflow, array bound violations, and divide-by-zero) [3].

  • Portability across platforms (discussed more in Consideration #9).

  • Ability to support modularity, including encapsulation and information hiding.

  • Strong data typing.

  • Well-defined control structures.

  • Support to interface with a real-time operating system (RTOS), if using an RTOS on a current or future project.

  • Ability to support real-time system needs, such as multitasking and exception handling.

  • Ability to interface with other languages (such as assembly).

  • Ability to compile and debug separately.

  • Ability to interact with hardware, if not using an RTOS.

Consideration 2: Software criticality. The higher the criticality, the more controlled the language must be. A level D project might be able to certify with Java or C#, but level A projects require a higher degree of determinism and language maturity. Some compiler manufacturers provide a real-time and/or safety-critical subset of their general-purpose language.

Consideration 3: Personnel’s experience. Programmers tend to think in the language they are most familiar with. If an engineer is an Ada expert, it will be difficult to switch to C. Likewise, programming in assembly requires a special skill set. Programmers are capable of programming in multiple languages, but it takes time to become proficient and to fully appreciate the capabilities and pitfalls of each language.

Consideration 4: Language’s safety support. The language and compiler must be able to meet the applicable DO-178C objectives, as well as the required safety requirements. Level A projects are required to verify the compiler output to prove that it does not generate unintended code. This typically leads organizations to select mature, stable, and well-established compilers.

Consideration 5: Language’s tool support. It is important to have tools to support the development effort. Here are some of the points to consider:

  • The compiler should be able to detect errors and support safety needs.

  • A dependable linker is needed.

  • A good debugger is important. The debugger must be compatible with the selected target computer.

  • The testability of the language should be considered. Test and analysis tools are often language specific and sometimes even compiler specific.

Consideration 6: Interface compatibility with other languages. Most projects use at least one high-level language, as well as assembly. One project that I worked on used Ada, C, and assembly for the airborne software, as well as C++, Java, and C# for the tools. The language and compiler selected must be capable of interfacing with assembly and code from other utilized languages. Typically, assembly files are linked in with the compiled code of the high-level language. Jim Cooling states it well: “One highly desirable feature is the ability to reference high-level designators (e.g. variables) from assembly code (and vice versa). For professional work this should be mandatory. The degree of cross-checking performed by the linker on the high-level and assembler routines (e.g. version numbers) is important” [3].

Consideration 7: Compiler track record. Normally, a compiler that is compatible with the international standards (e.g., ANSI or ISO) is desired. Even if only a subset of the compiler is utilized, it is important to ensure that the features used properly implement the language. Most safety-critical software developers select a mature compiler with a proven track record. For level A projects, it is necessary to show that the compiler-generated code is consistent with the source code. Not all compilers can comply with these criteria.

Consideration 8: Compatibility with selected target. Most compilers are target specific. The selected compiler and environment must produce code compatible with the utilized processor and peripheral devices. Typically, the following capabilities to access processor and devices are needed: memory access (for control of data, code, heap, and stack operations), peripheral device interface and control, interrupt handling, and support of any special machine operations [3].

Consideration 9: Portability to other targets. Most companies consider the portability of the code when selecting a language and compiler. A majority of aviation projects build upon existing software or systems, rather than developing brand new code. As time progresses, processors become more capable and older ones become obsolete. Therefore, it is important to select a language and compiler that will be somewhat portable to other targets. This is not always predictable, but it should at least be considered. For example, assembly is typically not portable, so every time the processor changes, the code needs to be modified. If the same processor family is used, the change may be minimal, but it still must be considered. Ada and C tend to be more portable but may still have some target dependencies. Java was developed to be highly portable, but, as discussed before, it is not currently ready for use in the safety-critical domain.

8.2.4 General Recommendations for Programming

This section is not intended to be a comprehensive programming guide. Instead, it provides a high-level overview of safety-critical programming practices based on the thousands of lines of code in multiple languages and across multiple companies that I have examined. These recommendations are applicable to any language and may be considered in company coding standards.

Recommendation 1: Use good design techniques. Design is the blueprint for the code. Therefore, a good design is important for the generation of good software. The programmer should not be expected to make up for the shortcomings of the requirements and design. Chapter 7 provides characteristics of good design (e.g., loose coupling, strong cohesion, abstraction, and modularity).

Recommendation 2: Encourage good programming practices. Good programmers take pride in being able to do what many consider impossible; they pull off small miracles on a regular basis. However, programmers can be an odd lot. One engineer that I worked with spent his evenings reading algorithm books. I am a geek and have a house full of books, but I do not find algorithm books that enjoyable. The following recommendations are offered to promote good programming practices:

  • Encourage teamwork. Teamwork helps to filter out the bad practices and unreadable code. There are several ways to implement teamwork, including pair programming (where coding is performed by a pair of programmers), informal reviews on a daily or weekly basis, or a mentor–trainee arrangement.

  • Hold code walkthroughs. Formal reviews are essentially required for higher software levels. However, there are great benefits to having less formal reviews by small teams of programmers. In general, it is good to have every line of code reviewed by at least two other programmers. There are several benefits to this review process. First, it provides some healthy competition among peers—no one wants to look foolish in front of their peers. Second, reviews help to standardize the coding practices. Third, reviews foster continuous improvement; when a programmer sees how someone else solved a problem, he or she can add that technique to his or her own bag of tricks. Fourth, reviews can also increase reusability. There may be some routines that can be reused among functions. Finally, reviews provide some continuity if someone leaves the team.

  • Provide good code examples. This can function as a training manual for the team. Some companies keep a best code listing. The listing is updated with good examples. This provides a training tool and encourages programmers to develop good code that might make it into the listing.

  • Require adherence to the coding standard. DO-178C requires compliance with the coding standards for levels A to C. Too often, the standards are ignored until it is time for the formal code review, when it is difficult to make changes. Programmers should be trained on the standards and required to follow them. This means having reasonable standards that can be applied.

  • Make code available to the entire team. When people know that others will be looking at their work, they are more prone to keep it cleaned up.

  • Reward good code. Recognize those who generate good code. The code quality determination is often based on the feedback of peers, the number of defects found during review, the speed of the code development, and the overall stability and maturity of the code throughout the project. The reward should be something they want. At the same time, rewards should be treated with caution, since people tend to optimize what they are being measured on. For example, basing performance on lines of code generated may actually encourage inefficient programming.

  • Encourage the team to take responsibility for their work. Hold each team member accountable for his or her work. Be willing to admit mistakes and avoid the blame game. Excuses are harmful to the team. Instead of excuses, propose solutions [8].

  • Provide opportunities for professional development. Support training and anything that will promote the professional development of your programmers.

Recommendation 3: Avoid software deterioration. Software deterioration occurs when disorder increases in the software. The progression from clean code to convoluted and incorrect code starts slowly. First, one decides to wait and add the comments later; then he or she puts in some trial code to see how it works and forgets to remove it; etc. Before long, an unreadable and unmaintainable body of code emerges. Code deterioration occurs when details are neglected. It is remedied by being proactive. When something looks awry, deal with it; do not ignore it. If there is no time to deal with it, keep an organized list of issues that need attention and address them before the code review.

Recommendation 4: Keep maintainability in mind throughout. Steve McConnell writes: “Keep the person who has to modify your code in mind. Programming is communicating with another programmer first and communicating with the computer second” [6]. The majority of a coder’s time is spent reviewing and modifying code, either on the first project or on a maintenance effort. Therefore, code should be written for humans, not just for machines. Many of the suggestions provided in this section will help with the maintainability. Readability and well-ordered program structures should be a priority. Decoupled code also supports maintainability.

Recommendation 5: Think the code through ahead of time. For each chapter of this book, I have spent hours researching the subject and organizing my thoughts into an outline. The actual writing occurs rather quickly once the research is done and the organization determined. The same is true when writing code.

In my experience, the most problematic code evolves when the programmers are forced to create code fast. The requirements and design are either immature or nonexistent, and the programmers just crank out code. It is like building a new house without a blueprint and without quality materials; it is going to crumble—you just do not know when.

A solid design document provides a good starting point for the programmers. However, very few will be able to directly go from design to code without some kind of intermediate step. This step could be formal or informal. It may become part of the design or may become the comments in the code itself.

McConnell recommends the Pseudocode Programming Process (PPP) [6]. The PPP uses pseudocode to design and check a routine prior to coding, reviewing, and testing the routine. The PPP uses English-like statements to explain the specific operations of the routine. The statements are language independent, so the pseudocode can be programmed in any language, and the pseudocode is at a higher level of abstraction than the code itself. The pseudocode communicates intent rather than implementation in the target language. The pseudocode may become part of the detailed design and is often easier to review for correctness than the code itself. An interesting benefit of PPP is that the pseudocode can become the outline for the source code itself and can be maintained as comments in the code. Also, pseudocode is easier to update than the code itself. For detailed steps on the PPP, see chapter 9 of Code Complete by Steve McConnell. PPP is just one of many possible approaches for creating routines or classes, but it can be quite effective. As noted in Chapter 7, using pseudocode as LLRs may present some certification challenges, as discussed in DO-248C frequently asked question (FAQ) #82. See Section 7.1.2 for a summary of the concerns.

Recommendation 6: Make code readable. Code readability is extremely important. It affects the ability to understand, review, and maintain the code. It helps reviewers to understand the code and ensure that it satisfies the requirements. It is also critical for code maintenance long after the programmer has transitioned to another project.

One time I was asked to review the code for a communication system. I read the requirements and design for a particular functionality and crawled through the code. The C code was very clever and concise. There were no comments, and white space was scarce. It was extremely difficult to determine what the code did and how it related to the requirements. So, I finally asked to have the programmer explain it. It took him a while to recall his intentions, but he was able to eventually walk me through the code. After I understood his thought process, I said something like: “the code is really hard to read.” To which he responded without hesitation: “It was hard for me to write; I figured it should be hard for others to read.” Not exactly the wisest words to say to an auditor from the Federal Aviation Administration, but at least the guy was honest. And, I think he summarized very well the mentality of many programmers.

Some programmers consider it job security to write cryptic code. However, it is really a liability to the company. It cannot be emphasized enough: code should be written for humans as well as the computer. McConnell writes: “The smaller part of the job of programming is writing a program so that the computer can read it; the larger part is writing it so that other humans can read it” [6].

Writing code to be readable has an impact on its comprehensibility, reviewability, error rate, debuggability, modifiability, quality, and cost [6]. All of these are important factors for real projects.

There are two aspects of coding that significantly impact the readability: (1) layout and (2) comments. Following is a summary of layout and comment recommendations to consider as a bare minimum. The “Recommended Reading” section of this chapter provides some other resources to consider.

  1. Code Layout Recommendations

    1. Show the logical structure of the code. As a general rule, indent statements under the statement to which they are logically subordinate. Studies show that indentation helps with comprehension. Two-space to four-space indentation is typically preferred, since anything deeper decreases comprehension [6].

    2. Use whitespace generously. Whitespace includes blank lines between blocks of code or sections of related statements that are grouped together, as well as indentation to show the logical structure of the code. For example, when coding a routine, it is helpful to use blank lines between the routine header, data declarations, and body.

    3. Consider modifiability in layout practices. When determining the style and layout practices, use something that is easy to modify. Some programmers like to use a certain number of asterisks to make a header look pretty; however, it can take unnecessary time and resources to modify. Programmers tend to ignore things that waste their time; therefore, layout practices should be practical.

    4. Keep closely related elements together. For example, if a statement breaks lines, break it in a readable place and indent the second line, so it is easy to see they are together. Likewise, keep related statements together.

    5. Use only one statement per line. Many languages allow multiple statements on a single line; however, it becomes difficult to read. One statement per line promotes readability, complexity assessment, and error detection.

    6. Limit statement length. In general, statements should not be over 80 characters. Lines more than 80 characters are hard to read. This is not a hard rule but a recommended general practice to increase readability.

    7. Make data declarations clear. To clearly declare data the following are suggested: use one data declaration per line, declare variables close to where they are first used, and order declarations by type [6].

    8. Use parentheses. Parentheses help to clarify expressions involving more than two terms.

  2. Code Comment Recommendations

    Code commenting is often lacking, inaccurate, or ineffective. Here are some recommendations to effectively comment code.

    1. Comment the “why.” In general, comments should explain why something is done—not how it is done. Comments ought to summarize the purpose and goal(s) (i.e., the intent) of the code. The code itself will show how it is done. Use the comments to prepare the reader for what follows. Typically, one or two sentences for each block of code are about right.

    2. Do not comment the obvious. Good code should be self-documenting, that is, readable without explanation. “For many well-written programs, the code is its own best documentation” [9]. Use comments to document things that are not obvious in the code, such as the purpose and goal(s). Likewise, comments that just echo back the code are not useful.

    3. Comment routines. Use comments to explain the purpose of routines, the inputs and outputs, and any important assumptions. For routines, it is advisable to keep the comments close to the code they describe. If the comments are all included at the top of the routine, the code may still be difficult to read and the comments may not get updated with the code. It is preferable to briefly explain the routine at the top and then include specific comments throughout the routine body. If a routine modifies global data, this is important to explain.

    4. Use comments for global data. Global data should be used cautiously; however, when they are used, they should be commented in order to ensure that they are used properly. Comment global data when they are declared, including the purpose of the data and why they are needed. Some developers even choose to use a naming convention for global data (e.g., start variables with g_). If a naming convention is not used, comments will be very important to ensure proper use of the global data [6].

    5. Do not use comments to compensate for bad code. Bad code should be avoided—not explained.

    6. Comments should be consistent with the code. If the code is updated, the comments must also be updated.

    7. Document any assumptions. The programmer should document any assumptions being made and clearly identify them as assumptions.

    8. Document anything that could cause surprises. Unclear code should be evaluated to decide if it needs a rewrite. If the code is appropriate, a comment should be included to explain the rationale. As an example, performance issues sometimes drive some clever code. However, this should be the exception and not the rule. Such code should be explained with a comment.

    9. Align comments with the corresponding code. Each comment should align with the code it explains.

    10. Avoid endline comments. In general, it is best to put comments on a different line, rather than adding them at the end of the code line. The possible exceptions are data declarations and end-of-block notes for long blocks of code [6].

    11. Precede each comment with a blank line. This helps with the overall readability.

    12. Write comments to be maintainable. Comments should be written in a style that is easy to maintain. Formatting intended to make comments look nice can also make them difficult to maintain. Coders should not have to spend their precious time counting dashes and aligning the stars (asterisks).

    13. Comment proactively. Commenting should be an integral part of the coding process. If used properly, it can even become the outline for the code that the programmer then uses to organize the code. Keep in mind that if the code is difficult to comment, it might not be good code to start out with and may need to be modified.

    14. Do not overcomment. Just as too few comments can be bad, so can too many. I rarely see overcommented code, but when I do, it tends to be the result of unnecessary redundancy. Comments should not just repeat the code, but should explain why the code is needed. Studies at IBM showed that one comment for every 10 statements is the clarity peak for commenting; more or fewer comments reduced understandability [10]. Obviously, this is just a general guideline, but it is worth noting. Do not concentrate too much on the number of comments; rather, evaluate whether the comments describe why the code is there.
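The layout and commenting points above can be illustrated with a short sketch. The routine, names, and limits below are hypothetical, chosen only to show the style:

```c
#include <stdbool.h>

#define MAX_ALTITUDE_FT 45000

/* Checks whether an altitude is within the operating envelope.
   Inputs:  altitude_ft - measured altitude in feet
            floor_ft    - lower limit of the envelope in feet
   Returns: true if the altitude is inside the envelope
   Assumption: the ceiling is fixed at MAX_ALTITUDE_FT. */
static bool altitude_in_range(int altitude_ft, int floor_ft)
{
    bool in_range;                     /* one declaration per line */

    /* Parentheses clarify the multi-term expression, and one
       statement per line keeps the logic easy to review. */
    in_range = ((altitude_ft >= floor_ft) &&
                (altitude_ft <= MAX_ALTITUDE_FT));

    return in_range;
}
```

Note that the header comment states the purpose, inputs, and assumption (the "why"), while the body comment prepares the reader for the expression that follows rather than echoing it.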

Recommendation 7: Manage and minimize complexity. Managing and minimizing complexity is one of the most important technical topics in software development. Addressing complexity starts at the requirements and design level, but code complexity should also be carefully monitored and controlled.

When the code grows too complex, it becomes unstable and unmanageable, and productivity moves in the negative direction.

On one particularly challenging project, I actually had to read the convoluted code to understand what the software did. The requirements were useless, the design was nonexistent, and the code was hacked together. It took years to clean up the code, generate the requirements and design, and prove that it really did what it was supposed to do. Complexity was only one of the many challenges on this particular project.

To avoid such a scenario, I offer the following suggestions to help manage and minimize complexity:

  • Use the concept of modularity to break the problem into smaller, manageable pieces.

  • Reduce coupling between routines.

  • Use overloading of operators and variable names cautiously, or prohibit it altogether when possible.

  • Use a single point of entry and exit for routines.

  • Keep routines short and cohesive.

  • Use logical and understandable naming conventions.

  • Reduce the number of nested decisions, as well as inheritance tree depth.

  • Avoid clever routines that are hard to understand. Instead, strive for simple, easy-to-understand code.

  • Convert a nested if to a set of if–then–else statements or a case statement, if possible.

  • Use a complexity measurement tool or technique to identify overly complex code. Then rework code as needed.

  • Have people unfamiliar with the code review it to ensure it is understandable. It is easy to get so engrained in the data that you lose sight of the bigger picture. Independent reviews by technically competent people can provide a valuable sanity check.

  • Use modularity and information hiding to separate the low-level details from the module use. This essentially allows one to divide the problem into smaller modules (or routines, components, or classes) and hide complexity so it does not need to be dealt with each time. Each module should have a well-defined interface and a body. The body implements the module, whereas the interface is what the user sees.

  • Minimize use of interrupt-driven and multitasking processing where possible.

  • Limit the size of files. There is no magic number, but in general, files over 200–250 lines of code become difficult to read and maintain.
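The suggestion to convert nested ifs to a case statement can be sketched as follows. The flight modes and thrust limits are illustrative values, not from any real system:

```c
typedef enum { MODE_IDLE, MODE_TAXI, MODE_TAKEOFF, MODE_CRUISE } flight_mode_t;

/* A chain of nested ifs rewritten as a single case (switch) statement;
   each mode maps directly to a thrust-limit percentage, which is far
   easier to review than nested decisions. */
static int thrust_limit_pct(flight_mode_t mode)
{
    int limit;

    switch (mode) {
    case MODE_IDLE:    limit = 5;   break;
    case MODE_TAXI:    limit = 20;  break;
    case MODE_TAKEOFF: limit = 100; break;
    case MODE_CRUISE:  limit = 85;  break;
    default:           limit = 0;   break;   /* defensive default */
    }

    return limit;   /* single point of exit */
}
```

The flat structure also keeps the cyclomatic complexity low, which a complexity measurement tool can confirm.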

Recommendation 8: Practice defensive programming. DO-248C, FAQ #32 states: “Defensive programming practices are techniques that may be used to prevent code from executing unintended or unpredictable operations by constraining the use of structures, constructs, and practices that have the potential to introduce failures in operation and errors into the code” [11]. DO-248C goes on to recommend avoidance of input data errors, nondeterminism, complexity, interface errors, and logical errors during the programming process. Defensive programming increases the overall robustness of the code in order to avoid undesirable results and to protect against unpredictable events.

Recommendation 9: Ensure the software is deterministic. Safety-critical software must be deterministic; therefore, coding practices that could lead to nondeterminism must be avoided or carefully controlled (e.g., self-modifying code, dynamic memory allocation/deallocation, dynamic binding, extensive use of pointers, multiple inheritance, or polymorphism). Well-defined languages, proven compilers, limited optimization, and limited complexity also help with determinism.

Recommendation 10: Proactively address common errors. It can be beneficial to keep a log of common errors to help with training and raise awareness for the entire team, along with guidelines for how to avoid them. As an example, interface errors are common. They can be addressed by minimizing complexity of interfaces, consistently using units and precision, minimizing use of global variables, and using assertions to identify mismatched interface assumptions. As another example, common logic and computation errors might be avoided by examining accuracy and conversion issues (such as fixed-point scaling), watching for loop count errors, and using proper precision for floating-point numbers.

In his book The Art of Software Testing, Myers lists 67 common coding errors and breaks them into the following categories: data reference errors, data declaration errors, computation errors, comparison errors, control flow errors, interface errors, input/output errors, and other errors [12]. Similarly, in the book Testing Computer Software, Cem Kaner et al. identify and categorize 480 common software defects [13]. These sources provide a starting point for a common-errors list; however, they are just a starting point. The common errors will vary depending on language used, experience, system type, etc.

Recommendation 11: Use assertions during development. Assertions may be used to check for conditions that should never occur, whereas error-handling code is used for conditions that could occur. Following are some guidelines for assertions [6,8]:

  • Assertions should not include executable code, since assertions are normally turned off at compile-time.

  • If it seems it can never happen, use an assertion to ensure that it will not.

  • Assertions are useful for verifying pre- and post-conditions.

  • Assertions should not be used to replace real error handling.
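A minimal sketch of these guidelines, using the standard C assert macro (the table and routine name are hypothetical):

```c
#include <assert.h>

/* An assertion documents a condition believed impossible (a bug if it
   fires), as opposed to error handling, which deals with bad input that
   can legitimately occur in operation. */
static int lookup_gain(int table_index)
{
    static const int gain_table[4] = { 1, 2, 4, 8 };

    /* Precondition: the caller must pass a valid index. If this fires,
       it is a coding error, not an input error. Note the assertion has
       no side effects, since asserts are normally compiled out. */
    assert(table_index >= 0 && table_index < 4);

    return gain_table[table_index];
}
```

In C, defining NDEBUG at compile time removes the assertions, which is why they must never contain executable logic the program depends on.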

Recommendation 12: Implement error handling. Error handling is similar to assertions, except error handling is used for conditions that could occur. Error handling checks for bad input data; assertions check for bad code (bugs) [6]. Data will not always come in the proper format or with acceptable values; therefore, one must protect against invalid input. For example, check values from external sources for range tolerance and/or corruption, look for buffer and integer overflows, and check values of routine input parameters [11]. There are a number of possible responses to an error, including return a neutral value, substitute the next piece of valid data, return the same answer as the previous time, substitute the closest legal value, log a warning message to a file, return an error code, call an error-processing routine, display an error message, handle the error locally, or shut down the system [6]. Obviously, the response to the error will depend on the criticality of the software and the overall architecture. It is important to handle errors consistently throughout the program. Additionally, ensure that the higher level code actually handles the errors that are reported by the lower level code [6]. For example, applications (higher level code) should handle errors reported by the real-time operating system (lower level code).
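One of the error responses listed above — returning the same answer as the previous time — can be sketched as follows. The sensor, its range, and the g_ naming convention are illustrative assumptions:

```c
/* Input-data error handling: an out-of-range sensor value is replaced
   by the previous valid reading, and the fault is counted so that
   higher level code can act on it rather than have it silently hidden. */
static int g_last_valid_temp_c = 15;          /* g_ marks global data   */
static unsigned int g_temp_fault_count = 0u;

static int filtered_temp_c(int raw_temp_c)
{
    /* Acceptable range for this hypothetical sensor. */
    if ((raw_temp_c < -60) || (raw_temp_c > 90)) {
        g_temp_fault_count++;            /* report the error upward     */
        return g_last_valid_temp_c;      /* return previous valid value */
    }

    g_last_valid_temp_c = raw_temp_c;
    return raw_temp_c;
}
```

Which response is appropriate — previous value, closest legal value, error code, or shutdown — depends on the criticality and architecture, as noted above.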

Recommendation 13: Implement exception handling. Exceptions are errors or fault conditions that make further execution of a program meaningless [3]. When exceptions are thrown, an exception handling routine should be called. Exception handling is one of the most effective methods for dealing with run-time problems [3]. However, languages vary on how they implement exceptions and some do not include the exception construct at all. If exceptions are not part of the language, the programmer will need to implement checks for error conditions in the code. Consider the following tips for exceptions [6,8]:

  • Programs should only throw exceptions for exceptional conditions (i.e., ones that cannot be addressed by other coding practices).

  • Exceptions should notify other parts of the program about errors that require action.

  • Error conditions should be handled locally when possible, rather than passing them on.

  • Exceptions should be thrown at the right abstraction level.

  • The exception message should identify the information that led to the exception.

  • Programmers should know the exceptions that the libraries and routines throw.

  • The project should have a standard approach for using exceptions.
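C has no exception construct, so the alternative the text describes — explicit checks for error conditions, propagated to the right level — can be sketched like this. The status codes and routine names are hypothetical:

```c
/* Error-code propagation in a language without exceptions: each level
   checks the status from the level below and either handles it locally
   or passes it to the level that can act on it. */
typedef enum {
    STATUS_OK = 0,
    STATUS_DIVIDE_BY_ZERO
} status_t;

static status_t safe_divide(int numerator, int denominator, int *result)
{
    if (denominator == 0) {
        return STATUS_DIVIDE_BY_ZERO;    /* "throw" via return code */
    }
    *result = numerator / denominator;
    return STATUS_OK;
}

static status_t compute_ratio_pct(int part, int whole, int *pct)
{
    int ratio = 0;
    status_t status = safe_divide(part * 100, whole, &ratio);

    if (status != STATUS_OK) {
        return status;      /* cannot handle locally; propagate upward */
    }
    *pct = ratio;
    return STATUS_OK;
}
```

As with language-supplied exceptions, the project should standardize the set of status codes and which level is responsible for handling each one.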

Recommendation 14: Employ common coding practices for routines, variable usage, conditionals, loops, and control. Coding standards should identify the recommended practices for routines (such as functions or procedures), variable usage (e.g., naming conventions and use of global data), conditionals, control loops, and control issues (e.g., recursion, use of goto, nesting depth, and complexity).*

Recommendation 15: Avoid known troublemakers. Developing an exhaustive list of everything that can go wrong in the coding process is impossible. However, there are a few known troublemakers out there and practices to avoid them, as noted in the following:

  1. Minimize the use of pointers. Pointers are one of the most error-prone areas of programming. I cannot even begin to describe the hours I have spent tracking down issues with pointers—always staring some insane deadline in the face. Some companies choose to avoid pointers altogether. If you can find a reasonable way to work around pointers, it is definitely recommended. However, that may not always be an option. If pointers are used, their usage should be minimized, and they should be used cautiously.

  2. Limit use of inheritance and polymorphism. These are related to object-oriented technology, which is discussed in Chapter 15.

  3. Be careful when using dynamic memory allocation. Most certification projects prohibit dynamic memory allocation. DO-332 (the object-oriented supplement) provides some recommendations for how to handle it safely, if it is used.

  4. Minimize coupling and maximize cohesion. As noted in Chapter 7, it is desirable to minimize the data and control coupling between code components and to develop highly cohesive components. The design sets the stage for this concept. However, the programmers will be the ones who actually implement it. Therefore, it is worth emphasizing again. Andrew Hunt and David Thomas recommend: “Write shy code—modules that don’t reveal anything unnecessary to other modules and that don’t rely on other modules’ implementation” [8]. This is the concept of decoupled code. In order to minimize coupling, limit module interaction; when it is necessary for modules to interact, ensure that it is clear why the interaction is needed and how it takes place.

  5. Minimize use of global data. Global data are available to all design sections and, therefore, may be modified by any individual as work progresses. As already noted, global data should be minimized in order to support loose coupling and to develop more deterministic code. Some of the potential issues with global data are as follows: the data may be inadvertently changed, code reuse is hindered, initialization ordering of the global data can be uncertain, and the code becomes less modular. Global data should only be used when absolutely necessary. When they are used, it helps to distinguish them somehow (perhaps by a naming convention). Also, it is useful to implement some kind of lock or protection to control access to global data. Additionally, an accurate and current data dictionary with the global data names, descriptions, types, units, readers, and writers is important. Accurate documentation of the global data will be essential when analyzing the data coupling (which is discussed in Chapter 9).

  6. Use recursion cautiously or not at all. Some coding standards completely prohibit recursion, particularly for more critical software levels. If recursion is used, possibly for lower levels of criticality, explicit safeguards should be included in the design to prevent stack overrun due to unlimited recursion. DO-178C section 6.3.3.d promotes the prevention of “unbounded recursive algorithms” [1]. DO-248C FAQ #39 explains: “An unbounded recursive algorithm is an algorithm that directly invokes itself (self-recursion) or indirectly invokes itself (mutual recursion) and does not have a mechanism to limit the number of times it can do this before completing” [11]. The FAQ goes on to explain that recursive algorithms need an upper bound on the number of recursive calls and that it should be shown that there is adequate stack space to accommodate the upper bound [11].

  7. Use reentrant functions cautiously. Similar to recursion, many developers prohibit the use of reentrant code. If it is allowed, as is often the case in multithreaded code, it must be directly traceable to the requirements and should not assign values to global variables.

  8. Avoid self-modifying code. Self-modifying code is a program that modifies its instruction stream at run-time. Self-modifying code is error prone and difficult to read, maintain, and test. Therefore, it should be avoided.

  9. Avoid use of goto statement. Most safety-critical coding standards prohibit the use of goto because it is difficult to read, can be difficult to prove proper code functionality, and can create spaghetti code. That being said, if it is used, it should be used sparingly and very cautiously.*

  10. Justify any situations where you opt to not follow these recommendations. These are merely recommendations and there may be some situations that warrant violating one or more of these recommendations. However, it should also be noted that these recommendations are based on several years of practice and coordinating with international certification authorities. When opting not to follow these recommendations, be sure to technically justify it and to coordinate with the certification authority.
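Item 6 above — bounding recursion so that stack usage has a provable upper limit — can be sketched as follows. The depth limit and error convention are illustrative assumptions:

```c
/* If recursion is allowed at all, bound it explicitly. The limit gives
   a worst-case call depth from which maximum stack usage can be shown. */
#define MAX_RECURSION_DEPTH 16

/* Bounded recursive factorial; returns -1 if the depth bound would be
   exceeded (callers must check this, as with any error code). */
static long bounded_factorial(int n, int depth)
{
    if (depth > MAX_RECURSION_DEPTH) {
        return -1;                    /* bound exceeded: report an error */
    }
    if (n <= 1) {
        return 1;
    }
    {
        long rest = bounded_factorial(n - 1, depth + 1);
        return (rest < 0) ? rest : (long)n * rest;
    }
}
```

For higher software levels, an iterative formulation would normally be preferred to recursion altogether; this sketch only shows how a bound can be enforced when recursion is justified.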

Recommendation 16: Provide feedback when issues are noted in requirements or design. During the coding phase, programmers are encouraged to provide feedback on any issues identified with the requirements or design. Design and coding phases are closely related and sometimes overlap. In some projects the designer is also the programmer. When the designers and programmers are separate, it is useful to include the programmers in the requirements and design reviews. This allows the programmers the opportunity to understand the requirements and design and to provide early feedback.

Once coding begins, there should be an organized way for programmers to identify issues with requirements and design and to ensure that appropriate action is taken. The problem reporting process is normally used to note issues with the requirements or design; however, there needs to be a proactive response to the identified issues. Otherwise, there is a risk that requirements, design, and code may become inconsistent.

Recommendation 17: Proactively debug the code. As will be discussed in Chapter 9, debug and developmental testing (e.g., unit testing and static code analysis) should occur during the coding phase. Do not wait until formal testing to find the code errors.

8.2.5 Special Code-Related Topics

8.2.5.1 Coding Standards

As noted in Chapter 5, coding standards are developed during the planning process to define how the selected programming language will be used on the project.* It is important to create complete coding standards and train the team how to use the standards. Typically, a company spends considerable time on their coding standards because the standards will be used on multiple projects throughout the company. The information provided in this chapter provides some concepts and issues to address in the coding standards.

I recommend including the rationale for each rule or recommendation in the standards, along with examples. Programmers are more apt to apply the standards if they understand why the suggestions are there.

Too frequently, code is written without proper attention to the standards. Then, during code reviews, significant problems are found and the code must be reworked. It is more efficient to ensure that the coders understand and apply the standards from the start.

8.2.5.2 Compiler-Supplied Libraries

Most compiler manufacturers supply libraries with their compiler to be used by the programmers to implement functionality in the code (e.g., math functions). When called, such library functions become part of the airborne software and need to satisfy the DO-178C objectives, just like other airborne software. The Certification Authorities Software Team (CAST)* position on this topic is documented in the CAST-21 paper, entitled “Compiler-Supplied Libraries.” The position basically requires the library code to meet the DO-178C objectives (i.e., it requires requirements, design, and tests for the library functions) [14]. Typically, manufacturers either develop their own libraries, including the supporting artifacts, or they reverse engineer the compiler-supplied library code to develop the requirements, design, and test cases. Functions that do not have the supporting artifacts (requirements, design, source code, tests, etc.) should either be removed from the library or purposely deactivated (removal is preferred, so the functions are not unintentionally activated in the next use of the library). Many companies develop an entire library, so it can be used on multiple projects. To make the libraries reusable, it is advisable to separate the library requirements, design, and test data from the other airborne software. Depending on the project, the libraries may need to be retested on the subsequent project (due to compiler setting differences, processor differences, etc.). For level C or D applications, it might be feasible to use testing and service history to demonstrate the library functionality. It is advisable to explain the approach for libraries in the Plan for Software Aspects of Certification (PSAC) in order to secure the certification authority’s agreement.

8.2.5.3 Autocode Generators

This chapter has mostly focused on handwritten source code. If an ACG is used, many of the concerns in this chapter should be considered when developing the ACG. Chapter 13 provides additional insight into the tool qualification process that may be required if the code generated by an ACG is not reviewed.

8.3 Verifying the Source Code

DO-178C Table A-5 objectives 1–6 and section 6.3.4 address the source code verification. Most of these objectives are satisfied with a code peer review (using the same basic review process discussed in Chapter 6 but with the focus on code). Each DO-178C Table A-5 objective, along with a brief summary of what is expected, is included in the following [1]:*

  • DO-178C Table A-5 objective 1: “Source Code complies with low-level requirements.” Satisfying this objective involves a comparison of the source code and low-level requirements to ensure that the code accurately implements the requirements and only the requirements. This objective is closely related to Table A-5 objective 5, since traceability helps with the compliance determination.

  • DO-178C Table A-5 objective 2: “Source Code complies with software architecture.” The purpose of this objective is to ensure that the source code is consistent with the architecture. This objective ensures that the data and control flows in the architecture and code are consistent. As will be discussed in Chapter 9, this consistency is important to support data and control coupling analyses.

  • DO-178C Table A-5 objective 3: “Source Code is verifiable.” This objective focuses on the testability of the code itself. The code needs to be written to support testing. Chapter 7 identifies characteristics of testable software.

  • DO-178C Table A-5 objective 4: “Source Code conforms to standards.” The purpose of this objective is to ensure that the code conforms to the coding standards identified in the plans. This chapter and Chapter 5 discuss the coding standards. Normally, a peer review and/or a static analysis tool are used to ensure that the code satisfies the standards. If a tool is used, it may need to be qualified.

  • DO-178C Table A-5 objective 5: “Source Code is traceable to low-level requirements.” This objective confirms the completeness and accuracy of the traceability between source code and low-level requirements. The traceability should be bidirectional. All requirements should be implemented and there should be no code that does not trace to one or more requirements. Generally, low-level requirements trace to source code functions or procedures. (See Chapter 6 for more information on bidirectional traceability.)

  • DO-178C Table A-5 objective 6: “Source Code is accurate and consistent.” This objective is a challenging one. Compliance with it involves a review of the code to look for accuracy and consistency.

However, the objective reference also states that the following are verified: “stack usage, memory usage, fixed point arithmetic overflow and resolution, floating-point arithmetic, resource contention and limitations, worst-case execution timing, exception handling, use of uninitialized variables, cache management, unused variables, and data corruption due to task or interrupt conflicts” [1]. This verification activity involves more than a code review. Chapter 9 explains some of the additional verification activities that are needed to evaluate stack usage, worst-case execution timing, memory usage, etc.

8.4 Development Integration

DO-178C identifies two aspects of integration. The first is the integration during the development effort; that is, the compiling, linking, and loading processes. The second aspect is the integration during the testing effort, which includes software/software integration and software/hardware integration. Integration typically starts by integrating software modules within a functional area on a host, then integrating multiple functional areas on the host, and then integrating the software on the target hardware. The effectiveness of the integration is proven through the testing effort. This section considers the integration during the development effort (see Figure 8.1). Chapter 9 considers integration during the testing phase.

Figure 8.1 The code and integration phase.

8.4.1 Build Process

Figure 8.1 provides a high-level view of the integration process, which includes the code compilation, the linking, and the loading onto the target computer. The process of using the source code to develop executable object code is called the build process. The output of the coding phase includes the source code, as well as the compile and link instructions. The compile and link instructions are documented in build instructions. The build instructions must be well documented with repeatable steps, because they document the process used to build the executable image that will be used for safety-critical operation. DO-178C section 11.16.g suggests that the build instructions be included in the Software Configuration Index (SCI).

The build instructions often include multiple scripts (such as makefiles). Because these scripts have a crucial role in the development of the executable image(s), they should be under configuration control and reviewed for accuracy, just like the source code. Unfortunately, the review of compile and link data is frequently overlooked during the planning phases. Some organizations take great care with the source code but neglect the scripts that enable the build process altogether. When explaining source code, DO-178C section 11.11 states: “This data consists of code written in source language(s). The Source Code is used with the compiling, linking, and loading data in the integration process to develop the integrated system or equipment” [1]. Therefore, the compile and link data should be carefully controlled along with the source code. This requires verification and configuration management of the data. The software level determines the extent of the verification and configuration management.

The build process relies on a well-controlled development environment. The development environment lists all tools (with versions), hardware, and settings (including compiler or linker settings) of the build environment. DO-178C suggests that this information be documented in a Software Life Cycle Environment Configuration Index (SLECI). The SLECI is discussed in Chapter 10.

Prior to building the software for release, most companies require a clean build. In this situation, the build machine is cleaned by removing all of its software. The build machine is then loaded with the approved software using the clean build procedure. This clean build ensures that the approved environment is used and that the build environment can be regenerated (which is important for maintenance). After the build machine is properly configured, the software build instructions are followed to generate the software for release. The clean build procedures are normally included or referenced in the SLECI or the SCI.

One aspect of the build process that is sometimes overlooked is the handling of compiler and linker warnings and errors. The build instructions should require that warnings and errors be examined after compiling and linking. The build instructions should also identify any acceptable warnings or the process for analyzing warnings to determine if they are acceptable. Errors are generally not acceptable.

In my experience, the clean build procedures and software build instructions are often not documented well; hence, they are not repeatable. The build process frequently relies on the engineer who performs the build on an almost daily basis. To address this common deficiency, it is beneficial to have someone who did not write the procedures, and who does not normally perform the build, execute the procedures to confirm repeatability. I recommend doing this as part of the build procedures review.

8.4.2 Load Process

The load process controls the loading of the executable image(s) onto the target. There are typically load procedures for the lab, the factory, and the aircraft (if the software is field-loadable). The load procedures should be documented and controlled. DO-178C section 11.16.k explains that the procedures and methods used to load the software into the target hardware should be documented in the SCI.*

The load instructions should identify how a complete load is verified, how incomplete loads are identified, how corrupted loads are addressed, and what to do if an error occurs during the loading process.

For aircraft systems, many manufacturers use the ARINC 615A [15] protocol and a high-integrity cyclic redundancy check to ensure that the software is properly loaded onto the target.
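The role the CRC plays in load verification can be sketched with a standard bit-wise CRC-32 (reflected form, polynomial 0xEDB88320). The ARINC 615A protocol details are beyond this sketch, and the function name is illustrative:

```c
#include <stddef.h>
#include <stdint.h>

/* Computes a standard CRC-32 over a loaded image. The loader compares
   this value against the CRC stored with the load; a mismatch indicates
   an incomplete or corrupted load. */
static uint32_t crc32_image(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++) {
            /* Shift right; if the low bit was set, fold in the polynomial. */
            crc = (crc >> 1) ^ ((crc & 1u) ? 0xEDB88320u : 0u);
        }
    }
    return crc ^ 0xFFFFFFFFu;
}
```

In practice a table-driven implementation is used for speed, and the CRC width and polynomial are chosen to provide the integrity level the load process requires.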

8.5 Verifying the Development Integration

The verification of the development integration process typically includes the following activities to ensure that the integration process is complete and correct:

  • Review the compile data, link data, and load data (e.g., scripts used to automate the build and load).

  • Review the build and load instructions, including an independent execution of the instructions, to ensure completeness and repeatability.

  • Analyze link data, load data, and memory map to ensure hardware addresses are correct, there are no memory overlaps, and there are no missing software components. This addresses DO-178C Table A-5 objective 7, which states: “Output of software integration process is complete and correct” [1]. These analyses are further discussed in Chapter 9.
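As a simple illustration of the memory map analysis described above, the following Python sketch checks a list of sections, with hypothetical names, addresses, and sizes as might be extracted from a linker map file, for overlapping address ranges and for missing software components.

```python
# Hypothetical (name, start_address, size) tuples extracted from a
# linker-generated map file; real values depend on the target hardware.
SECTIONS = [
    ("vector_table", 0x0800_0000, 0x0000_0400),
    ("text",         0x0800_0400, 0x0003_FC00),
    ("data",         0x2000_0000, 0x0000_2000),
]

# Components the build is required to contain (hypothetical).
REQUIRED = {"vector_table", "text", "data"}

def find_overlaps(sections):
    """Return pairs of section names whose address ranges overlap."""
    ordered = sorted(sections, key=lambda s: s[1])
    overlaps = []
    for (n1, a1, s1), (n2, a2, _s2) in zip(ordered, ordered[1:]):
        if a1 + s1 > a2:  # previous section runs past the next one's start
            overlaps.append((n1, n2))
    return overlaps

def missing_components(sections, required=REQUIRED):
    """Return required components absent from the map."""
    return required - {name for name, _start, _size in sections}
```

A real analysis would also confirm that each section falls within the valid address ranges of the target's memory devices; this sketch only shows the overlap and completeness portions.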

References

1. RTCA DO-178C, Software Considerations in Airborne Systems and Equipment Certification (Washington, DC: RTCA, Inc., December 2011).

2. Wikipedia, Assembly language, http://en.wikipedia.org/wiki/Assembly_language (accessed on January 2011).

3. J. Cooling, Software Engineering for Real-Time Systems (Harlow, U.K.: Addison-Wesley, 2003).

4. Wikipedia, Ada programming language, http://en.wikipedia.org/wiki/Ada_(programming_language) (accessed on January 2011).

5. Wikipedia, C programming language, http://en.wikipedia.org/wiki/C_(programming_language) (accessed on January 2011).

6. S. McConnell, Code Complete, 2nd edn. (Redmond, WA: Microsoft Press, 2004).

7. The Motor Industry Software Reliability Association (MISRA), Guidelines for the Use of the C Language in Critical Systems, MISRA-C:2004 (Warwickshire, U.K.: MISRA, October 2004).

8. A. Hunt and D. Thomas, The Pragmatic Programmer (Reading, MA: Addison-Wesley, 2000).

9. C. M. Krishna and K. G. Shin, Real-Time Systems (New York: McGraw-Hill, 1997).

10. C. Jones, Software Assessments, Benchmarks, and Best Practices (Reading, MA: Addison-Wesley, 2000).

11. RTCA DO-248C, Supporting Information for DO-178C and DO-278A (Washington, DC: RTCA, Inc., December 2011).

12. G. J. Myers, The Art of Software Testing (New York: John Wiley & Sons, 1979).

13. C. Kaner, J. Falk, and H. Q. Nguyen, Testing Computer Software, 2nd edn. (New York: John Wiley & Sons, 1999).

14. Certification Authorities Software Team (CAST), Compiler-supplied libraries, Position Paper CAST-21 (January 2004, Rev. 2).

15. Aeronautical Radio, Inc., Software data loader using Ethernet interface, ARINC REPORT 615A-3 (Annapolis, MD: Airlines Electronic Engineering Committee, June 2007).

Recommended Reading

1. S. McConnell, Code Complete, 2nd edn. (Redmond, WA: Microsoft Press, 2004).

2. A. Hunt and D. Thomas, The Pragmatic Programmer (Reading, MA: Addison-Wesley, 2000).

3. B. W. Kernighan and R. Pike, The Practice of Programming (Reading, MA: Addison-Wesley, 1999).

4. The Motor Industry Software Reliability Association (MISRA), Guidelines for the Use of the C Language in Critical Systems, MISRA-C:2004 (Warwickshire, U.K.: MISRA, October 2004).

5. ISO/IEC PDTR 15942, Guide for the Use of the Ada Programming Language in High Integrity Systems, http://anubis.dkuug.dk/JTC1/SC22/WG9/documents.htm (July 1999).

6. L. Hatton, Safer C: Developing Software for High-Integrity and Safety-Critical Systems (Maidenhead, U.K.: McGraw-Hill, 1995).

7. S. Maguire, Writing Solid Code (Redmond, WA: Microsoft Press, 1993).

*It is known as the Ada Reference Manual (ARM) or Language Reference Manual (LRM).

*Although not specific for safety-critical software, Steve McConnell’s book, Code Complete [6], provides detailed recommendations on each of these topics.

*Steve McConnell’s Code Complete [6] provides some suggestions for avoiding errors in pointer usage.

*In his book, Code Complete [6], Steve McConnell provides some suggestions on how to use goto conscientiously.

*DO-178C section 11.8 explains the expected contents of the standards.

*CAST is a team of international certification authorities who strive to harmonize their positions on airborne software and aircraft electronic hardware in CAST papers.

*DO-178C Table A-5 objective 7 is explained in Chapter 9.

*This was not included in DO-178B.
