7 Software Design

Acronyms

CSPEC control specification
HLR high-level requirement
LLR low-level requirement
PSPEC process specification
UML Unified Modeling Language

7.1 Overview of Software Design

DO-178C departs from much other software development literature in the area of design. DO-178B, and now DO-178C, explain that the software design contains the software architecture and low-level requirements (LLRs). Software architecture is a commonly understood part of design; however, the term LLR has caused considerable confusion. During the DO-178C committee deliberations, there was an attempt to adjust the terminology to align with other domains and common software engineering methods, but consensus was not achieved. Therefore, the term LLR remains and is discussed in this chapter.

The design serves as the blueprint for the software implementation phase. It describes both how the software will be put together (architecture) and how it will perform the desired functionality (LLRs). To understand the DO-178C guidance on design, it is important to understand the two elements of the software design: the architecture and the LLRs.

7.1.1 Software Architecture

DO-178C defines software architecture as: “The structure of the software selected to implement the software requirements” [1]. Roger Pressman provides a more comprehensive definition:

In its simplest form, architecture is the structure or organization of program components (modules), the manner in which these components interact, and the structure of data that are used by the components. In a broader sense, however, components can be generalized to represent major system elements and their interactions [2].

Architecture is a crucial part of the design process. Some things to keep in mind while documenting the architecture are noted here:

First, DO-178C compliance requires that the architecture be compatible with the requirements. Therefore, some means to ensure the compatibility is needed. Oftentimes traceability or a mapping between the requirements and architecture is used.

Second, the architecture should be documented in a clear and consistent format. It’s important to consider the coder who will use the design to implement the code, as well as the developers who will maintain the software and its design in the future. The architecture must be clearly defined in order to be accurately implemented and maintained.

Third, the architecture should be documented in such a way that it can be updated as needed and possibly implemented in iterations. This may be to support an iterative or evolutionary development effort, configuration options, or the safety approach.

Fourth, different architectural styles exist. For most styles, the architecture includes components and connectors; however, the type of these components and connectors depends on the architectural approach used. Most real-time airborne software uses a functional structure. In this case, components represent functions, and connectors show the interfaces between the functions (either in the form of data or control).
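As a small illustration of this functional style, the following C sketch treats components as functions and the connector as a shared data type. The component names, ADC range, and scaling values are invented for illustration, not taken from any real system:

```c
#include <stdbool.h>

/* Connector: the data passed between two components. */
typedef struct {
    double value;   /* engineering units           */
    bool   valid;   /* false if the reading failed */
} sensor_sample_t;

/* Component 1: convert a raw ADC count to engineering units. */
sensor_sample_t acquire(int raw_count) {
    sensor_sample_t s;
    s.valid = (raw_count >= 0 && raw_count <= 4095);  /* 12-bit range     */
    s.value = s.valid ? raw_count * 0.05 : 0.0;       /* 0.05 units/count */
    return s;
}

/* Component 2: consume the connector data and flag an exceedance. */
bool over_limit(sensor_sample_t s, double limit) {
    return s.valid && s.value > limit;
}
```

The architecture document would show `acquire` and `over_limit` as components and `sensor_sample_t` as the data connector between them.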

7.1.2 Software Low-Level Requirements

DO-178C defines LLRs as: “Software requirements developed from high-level requirements, derived requirements, and design constraints from which Source Code can be directly implemented without further information” [1]. The LLRs are a decomposition of the high-level requirements (HLRs) to a level from which code can be directly written. DO-178C essentially puts the detailed engineering thought process in the design phase and not in the coding phase. If the design is documented properly, the coding effort should be relatively straightforward. This philosophy is not without controversy. The need for two levels of software requirements (high-level and low-level) has been heavily debated. Following are 10 concepts to keep in mind when planning for and writing LLRs:

Concept 1: LLRs are design details. The word requirements can be misleading, since LLRs are part of design. It is helpful to think of them as implementation steps for the coder to follow. Sometimes LLRs are even represented as pseudocode or models. Some projects require the use of shall for their LLRs and some do not. I have seen both approaches used successfully.

Concept 2: LLRs must be uniquely identified. Since the LLRs must be traced up to HLRs and down to code, the LLRs need to be uniquely identified (this is why some organizations prefer to include shall in the requirements).
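For example, a coder might tag each code section with the unique identifier of the LLR it implements, making the downward trace visible in the source. The requirement identifiers, wording, and limits below are hypothetical:

```c
/* Hypothetical LLRs, each with a unique identifier:
 *   LLR-123: When altitude is below 10,000 ft and commanded speed
 *            exceeds 250 kt, the commanded speed shall be limited
 *            to 250 kt.
 *   LLR-124: Otherwise, the commanded speed shall pass through
 *            unchanged.
 * Each code section traces down from exactly one identifier. */
double limit_speed(double commanded_kt, double altitude_ft) {
    /* Trace: LLR-123 */
    if (altitude_ft < 10000.0 && commanded_kt > 250.0) {
        return 250.0;
    }
    /* Trace: LLR-124 */
    return commanded_kt;
}
```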

Concept 3: LLRs should have the quality attributes described in Chapters 2 and 6. However, the LLRs focus on how, not on what. The LLRs are the implementation details and should get into the details of how the software will be implemented to carry out the functionality documented in the HLRs.

Concept 4: LLRs must be verifiable. One of the reasons the word requirements was retained in DO-178C is that for levels A, B, and C, the LLRs need to be tested. This is important to consider when writing the requirements.

Concept 5: Occasionally, LLRs may not be needed at all. In some projects the HLRs are detailed enough that one can directly code from them. This is the exception and not the rule. And, as noted in Chapter 6, this approach is not recommended. HLRs should focus on functionality, whereas LLRs focus on implementation. However, if the HLRs are indeed detailed enough (which means they have likely mixed what and how), DO-178C section 5.0 does allow a single level of software requirements, if that single level can satisfy both the high-level and low-level requirements objectives (i.e., DO-178C Table A-2 objectives 1, 2, 4, and 5; Table A-3; and Table A-4). Additionally, the contents of the single level of requirements combined with the software architecture should address the guidance identified in DO-178C sections 11.9 (software requirements) and 11.10 (software design). Be warned, however, that this is not always accepted by the certification authorities and could result in project restart if not properly coordinated. If this approach seems feasible for your project, be sure to explain it in your plans, justify why it will work, and detail how the DO-178C objectives will be satisfied.

Concept 6: Occasionally, LLRs may not be needed in some areas. In a few projects, the HLRs will have adequate detail to code from in most areas but might need additional refinement in some areas. That is, some HLRs need to be decomposed into LLRs and some do not. If this approach is used, it should be clear which requirements are HLRs and which are LLRs. It should also be clear when the HLRs will not be further decomposed, so the programmer knows what requirements form the foundation for the coding effort. This approach can be quite tricky to carry out in reality, so use it with extreme caution.

Concept 7: More critical software tends to need more detailed LLRs. In my experience, the higher the software criticality (levels A and B), the more detailed the LLRs need to be to completely describe the requirements and obtain the needed structural coverage. The more critical the software is, the more rigorous the criteria for structural coverage. Structural coverage is discussed in Chapter 9; however, it is something to keep in mind during the design phase.

Concept 8: Derived LLRs must be handled with care. There may be some derived LLRs that are identified during the design phase. DO-178C defines derived requirements as: “Requirements produced by the software development processes which (a) are not directly traceable to higher level requirements, and/or (b) specify behavior beyond that specified by the system requirements or the higher level software requirements” [1]. Derived LLRs are not intended to compensate for holes in the HLRs but instead represent implementation details that were not yet known during the requirements phase. The derived LLRs do not trace up to HLRs but will trace down to code. A derived LLR should be documented, identified as derived, justified as to why it is needed, and evaluated by the safety assessment team to ensure it does not violate any of the system or safety assumptions. To support the safety team, the justification or rationale should be written so that someone unfamiliar with the details of the design can understand the requirement and evaluate its impact on safety and overall system functionality.

Concept 9: LLRs are usually textual but may be represented as models. The LLRs are often expressed in textual format with tables and graphics to communicate the details as needed. As will be discussed in Chapter 14, the LLRs may be captured as models. If this is the case, the model would be classified as a design model and the guidance of DO-331 would apply.

Concept 10: LLRs may be represented as pseudocode. Sometimes, LLRs are represented as pseudocode or are supplemented with pseudocode. DO-248C FAQ #82, entitled “If pseudocode is used as part of the low-level requirements, what issues need to be addressed?” provides certification concerns with using pseudocode as the LLRs [3]. When LLRs are represented as pseudocode, the following concerns exist [3]:

  • This approach may result in a large granularity jump between the HLRs and the LLRs, which could make it difficult to detect unintended and missing functionality.

  • This approach could result in insufficient architectural detail, which impacts verification, including data coupling and control coupling analysis.

  • Unique identification of LLRs may be difficult.

  • Bidirectional tracing to and from HLRs may be challenging.

  • Structural coverage achieved through low-level testing is generally inadequate, since the code and pseudocode are so similar; such a testing process does not effectively detect errors, identify missing functionality, or find unintended functionality.
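The similarity concern can be seen in a small example. The pseudocode LLR below (hypothetical) maps almost one-to-one onto the C that implements it, so low-level tests derived from it largely mirror the code rather than independently checking it:

```c
/* Pseudocode LLR (hypothetical):
 *     if fault_count > FAULT_LIMIT then status := FAILED
 *     else status := OK
 *
 * The C below is nearly a transliteration of that pseudocode, so a
 * test written from the LLR exercises little beyond what the code
 * itself already states. */
#define FAULT_LIMIT 3

typedef enum { STATUS_OK, STATUS_FAILED } status_t;

status_t evaluate_faults(int fault_count) {
    if (fault_count > FAULT_LIMIT) {
        return STATUS_FAILED;
    }
    return STATUS_OK;
}
```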

7.1.3 Design Packaging

The design packaging varies from project to project. Some projects integrate the architecture and LLRs, while others put them in entirely separate documents. Some projects include them in the same document but locate them in different sections. DO-178C does not dictate the packaging preference. It often depends on the methodology used. Regardless of the packaging decision, the relationship between the architecture and requirements must be clear. The design (including the LLRs) will be used by those implementing the software; therefore, the coders should be kept in mind as the design is documented.

7.2 Approaches to Design

There are a variety of techniques employed by designers to model the software architecture and behavior. The two design approaches used in aviation software are structure-based and object-oriented. Some projects combine the concepts from both approaches.

7.2.1 Structure-Based Design (Traditional)

The structure-based design is common for real-time embedded software and uses some or all of the following representations:

  • Data context diagram—the top-level diagram which describes the functional behavior of the software and shows the data input and output from the software.

  • Data flow diagram—a graphical representation of the processes performed by the software, showing the flow of data between processes. It is a decomposition of the data context diagram. The data flow diagram is typically represented in multiple levels, each level going into more detail. Data flow diagrams include processes, data flows, and data stores.

  • Process specification (PSPEC)—accompanies the data flow diagram and shows how the outputs of the processes are generated from the given inputs [4].

  • Control context diagram—the top-level diagram which shows the control of the system by establishing the control interfaces between the system and its environment.

  • Control flow diagram—the same diagram as the data flow diagram, except the flow of control through the system is identified rather than the flow of data.

  • Control specification (CSPEC)—accompanies the control flow diagram and shows how the outputs of the control processes are generated from the given control inputs [4].

  • Decision table (also called a truth table)—shows the combinations of decisions made based on given input.

  • State transition diagram—illustrates the behavior of the system by showing its states and the events that cause the system to change states. In some designs this might be represented as a state transition table instead of a diagram.

  • Response time specification—illustrates external response times that need to be specified. It may include event-driven, continuous, or periodic response times. It identifies the input event, output event, and the response time for each external input signal [4].

  • Flowchart—graphically represents sequence of software actions and decisions.

  • Structure chart—illustrates the partitioning of a system into modules, showing their hierarchy, organization, and communication [5].

  • Call tree (also called call graph)—illustrates the calling relationships between software modules, functions, or procedures.

  • Data dictionary—defines the data and control information that flow through the system. Typically includes the following information for each data item: name, description, rate, range, resolution, units, where/how used, etc.

  • Textual details—describes implementation details (e.g., the LLRs).

  • Tasking diagram—shows the characteristics of tasks (e.g., sporadic or periodic), task procedures, input/output of each task, and any interactions with the operating system (such as semaphores, messages, and queues).
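Several of these representations translate directly into code. For instance, a state transition table can be rendered as a lookup table indexed by current state and event; the states and events below are hypothetical:

```c
/* A state transition table rendered as data: rows are current states,
 * columns are events, entries are next states. */
typedef enum { ST_IDLE, ST_ARMED, ST_ACTIVE, NUM_STATES } state_t;
typedef enum { EV_ARM, EV_FIRE, EV_RESET, NUM_EVENTS } event_t;

static const state_t transition[NUM_STATES][NUM_EVENTS] = {
    /*            EV_ARM      EV_FIRE     EV_RESET */
    /* IDLE   */ { ST_ARMED,  ST_IDLE,    ST_IDLE },
    /* ARMED  */ { ST_ARMED,  ST_ACTIVE,  ST_IDLE },
    /* ACTIVE */ { ST_ACTIVE, ST_ACTIVE,  ST_IDLE },
};

state_t next_state(state_t current, event_t event) {
    return transition[current][event];
}
```

Keeping the table as data makes the design artifact and the implementation visibly the same structure, which eases review and tracing.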

7.2.2 Object-Oriented Design

Object-oriented design techniques may use these representations [2]:

  • Use case—accompanies the requirements to graphically and textually explain how a user interacts with the system under specific circumstances. It identifies actors (the people or devices that use the system) and describes how the actor interacts with the system.

  • Activity diagram—supplements the use case by graphically representing the flow of interaction within a scenario. It shows the flow of control between actions that the system performs. An activity diagram is similar to a flowchart, except the activity diagram also shows concurrent flows.

  • Swimlane diagram—a variation of the activity diagram; it shows the flow of activities described by the use case and simultaneously indicates which actor is responsible for the action described by an activity. It basically shows the activities of each actor in a parallel fashion.

  • State diagram—like the state transition diagram described earlier, the object-oriented state diagram shows the states of the system, actions performed depending on those states, and the events that lead to a state change.

  • State chart—an extension of the state diagram with added hierarchy and concurrency information.

  • Class diagram—a Unified Modeling Language (UML) approach which models classes (including their attributes, operations, and relationships and associations with other classes) by providing a static or structural view of a system.

  • Sequence diagram—shows the communications between objects during execution of a task, including the temporal order in which messages are sent between the objects to accomplish that task.

  • Object-relationship model—a graphical representation of the connections between classes.

  • Class-responsibility-collaborator model—provides a way to identify and organize the classes that are relevant to the requirements. Each class is represented as a box, sometimes referred to as an index card; each box includes class name, class responsibilities, and collaborators. The responsibilities are the attributes and operations relevant for the class. Collaborators are those classes that are required to provide information to another class to complete its responsibility. A collaboration is either a request for information or a request for some action.

Chapter 15 provides more information on object-oriented technology.

7.3 Characteristics of Good Design

DO-178C offers flexibility for documenting the design. Rather than go into detail on design techniques, which are available in many other books, let’s consider the characteristics that a good software design possesses.

Characteristic 1: Abstraction. A good design implements the concept of abstraction at multiple levels. Abstraction is the process of defining a program (or data) with a representation similar to its meaning (semantics), while hiding the implementation details. Abstraction strives to reduce and factor out details so that the designer can focus on a few concepts at a time. When abstraction is applied at each hierarchical level of the development, it allows each level to only deal with the details that are pertinent to that level. Both procedural and data abstraction are desirable.

Characteristic 2: Modularity. A modular design is one where the software is logically partitioned into elements, modules, or subsystems (often referred to as components). (A component may be a single code module or a group of related code modules.) The overall system is divided by separating the features or functions. Each component focuses on a specific feature or function. By separating the features and functionality into smaller, manageable components, it makes the overall problem less difficult to solve. Pressman writes:

You modularize a design (and the resulting program) so that development can be more easily planned; software increments can be defined and delivered; changes can be more easily accommodated; testing and debugging can be conducted more efficiently; and long-term maintenance can be conducted without serious side effects [2].

When a design is properly modularized, it is fairly simple to understand the purpose of each component, verify the correctness of each component, understand the interaction between components, and assess the overall impact of each component on the software structure and operation [6].

Characteristic 3: Strong cohesion. To make a system truly modular, the designer strives for functional independence with each component. This is achieved by managing cohesion and coupling. Good designs strive for strong cohesion and loose coupling. “Cohesion may be viewed as the glue that keeps the component together” [7]. Even though DO-178C doesn’t require an evaluation of a component’s cohesiveness, it should be considered during design because it will affect the overall quality of the design. Cohesion is a measure of the component’s strength and acts like a chain holding the component’s activities together [5]. Yourdon and Constantine define seven layers of cohesion, with cohesion becoming weaker as you go down the list [8]:

  • Functional cohesion: All elements contribute to a single function; each element contributes to the execution of only one task.

  • Sequential cohesion: The component consists of a sequence of elements where the output of one element serves as input to the next element.

  • Communicational cohesion: The elements of a component use the same input or output data but order is not important.

  • Procedural cohesion: The elements are involved in different and possibly unrelated activities that must be executed in a given order.

  • Temporal cohesion: The elements are functionally independent but their activities are related in time (i.e., they are carried out at the same time).

  • Logical cohesion: Elements include tasks that are logically related. A logically cohesive component contains a number of activities of the same general kind; the user picks what is needed.

  • Coincidental cohesion: Elements are grouped into components in a haphazard way. There is no meaningful relationship between the elements.
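The contrast between the strongest and weakest levels can be sketched in C; both functions are invented examples:

```c
/* Functional cohesion: every statement contributes to a single task,
 * converting a temperature. */
double celsius_to_fahrenheit(double c) {
    return c * 9.0 / 5.0 + 32.0;
}

/* Coincidental cohesion (to avoid): unrelated activities grouped only
 * by convenience -- bookkeeping and conversion in one unit. */
double convert_and_count(double c, int *call_counter) {
    *call_counter += 1;             /* unrelated bookkeeping  */
    return c * 9.0 / 5.0 + 32.0;    /* temperature conversion */
}
```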

Characteristic 4: Loose coupling. Coupling is the degree of interdependence between two components. A good design minimizes coupling by eliminating unnecessary relationships, reducing the number of necessary relationships, and easing the tightness of necessary relationships [5]. Loose coupling helps to minimize the ripple effect when a component is modified, since the component is easier to comprehend and adapt. Therefore, loose coupling is desired for effective and modular design.

DO-178C defines two types of coupling [1]:

  • Data coupling: “The dependence of a software component on data not exclusively under the control of that software component.”

  • Control coupling: “The manner or degree by which one software component influences the execution of another software component.”

However, software engineering literature identifies six types of coupling, which are listed in the following starting with the tightest coupling and going to the loosest [5,7]:

  • Content coupling: One component directly affects the working of another component, since one component refers to the inside of the other component.

  • Common coupling: Two components refer to the same global data area. That is, the components share resources.

  • External coupling: Components communicate through an external medium, such as a file or database.

  • Control coupling (not same as DO-178C definition): One component directs the execution of another component by passing the necessary control information.

  • Stamp coupling: Two components refer to the same data structure. This is sometimes called data structure coupling.

  • Data coupling (not same as DO-178C definition): Two components communicate by passing elementary parameters (such as a homogeneous table or a single field).
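A short C sketch contrasts common coupling with data coupling in the mainstream software engineering sense (not the DO-178C definitions); the names and limit are hypothetical:

```c
/* Common coupling (tight): both routines depend on a shared global. */
static double g_airspeed_kt;   /* shared resource */

void set_airspeed(double kt) { g_airspeed_kt = kt; }
int  overspeed_common(void)  { return g_airspeed_kt > 350.0; }

/* Data coupling (loose): the dependency is an elementary parameter,
 * visible in the interface and testable in isolation. */
int overspeed_data(double airspeed_kt) { return airspeed_kt > 350.0; }
```

Note how `overspeed_common` cannot be understood or tested without knowing who else writes `g_airspeed_kt`, while `overspeed_data` carries its entire dependency in its signature.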

Unfortunately, DO-178C overloads the terms data coupling and control coupling. The DO-178C use of data coupling is comparable to the mainstream software engineering concepts of data, stamp, common, and external coupling. The DO-178C use of control coupling is covered by software engineering concepts of control and content coupling [9]. Data and control coupling analyses are discussed in Chapter 9 since these analyses are part of the verification phase.

A well-designed software product strives for loose coupling and strong cohesion. Implementing these characteristics helps to simplify the communication between programmers, make it easier to prove correctness of components, reduce propagation of impact across components when a component is changed, make components more comprehensible, and reduce errors [7].

Characteristic 5: Information hiding. Information hiding is closely related to the concepts of abstraction, cohesion, and coupling. It contributes to modularity and reusability. The concept of information hiding suggests that “modules [components] should be specified and designed so that information (e.g., algorithms and data) contained within a module [component] is inaccessible to other modules [components] that have no need for such information” [2].*
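In C, information hiding is often approximated with file-scope static data exposed only through accessor functions; the fault counter below is an invented example:

```c
/* The counter's representation is private to this translation unit;
 * clients can use only the three operations below. */
static unsigned fault_count;   /* hidden state */

void fault_log_reset(void)     { fault_count = 0u; }
void fault_log_record(void)    { if (fault_count < 255u) { fault_count++; } }
unsigned fault_log_total(void) { return fault_count; }
```

Because no other component can touch `fault_count` directly, the saturation limit or even the counter's type can change without rippling beyond this unit.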

Characteristic 6: Reduced complexity. Good designs strive to reduce complexity by breaking the larger system into smaller, well-defined subsystems or functions. While this is related to the other characteristics of abstraction, modularity, and information hiding, it requires a conscientious effort by the designer. Designs should be documented in a straightforward and understandable manner. If the complexity is too great, the designer should look for alternate approaches to divide the responsibilities of the function into several smaller functions. Overly complex designs result in errors and significant impacts when change is needed.

Characteristic 7: Repeatable methodology. Good design is the result of a repeatable method driven by requirements. A repeatable method uses well-defined notations and techniques that effectively communicate what is intended. The methodology should be identified in the design standards.

Characteristic 8: Maintainability. The overall maintenance of the project should be considered when documenting the design. Designing for maintainability includes designing the software to be reused (wholly or partially), loaded, and modified with minimal impact.

Characteristic 9: Robustness. The overall robustness of the design should be considered during the design phase and clearly documented in the design description. Examples of robust design considerations include off-nominal functionality, interrupt functionality and handling of unintended interrupts, error and exception handling, failure responses, detection and removal of unintended functionality, power loss and recovery (e.g., cold and warm start), resets, latency, throughput, bandwidth, response times, resource limitations, partitioning (if required), deactivation of unused code (if deactivated code is used), and tolerances.

Characteristic 10: Documented design decisions. The rationale for decisions made during the design process should be documented. This allows proper verification and supports maintenance.

Characteristic 11: Documented safety features. Any design feature used to support safety should be clearly documented. Examples include watchdog timers, cross channel comparisons, reasonability checks, built-in test processing, and integrity checks (such as cyclic redundancy check or checksum).
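As one illustration of an integrity check, the sketch below uses a simple additive checksum; real projects typically use a CRC, but the pattern is the same: compute at store time, then recompute and compare before use. All names are hypothetical:

```c
#include <stdint.h>
#include <stddef.h>

/* Compute an 8-bit additive checksum over a data block. */
uint8_t checksum8(const uint8_t *data, size_t len) {
    uint8_t sum = 0u;
    for (size_t i = 0u; i < len; i++) {
        sum = (uint8_t)(sum + data[i]);
    }
    return sum;
}

/* Recompute and compare against the stored value before using data. */
int block_is_intact(const uint8_t *data, size_t len, uint8_t stored) {
    return checksum8(data, len) == stored;
}
```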

Characteristic 12: Documented security features. With the increasing sophistication of hackers, the design should include protection from vulnerabilities and detection mechanisms for attacks.

Characteristic 13: Reviewed. Throughout the design phase, informal technical reviews should be performed. During these informal reviews, the reviewers should seriously evaluate the quality and appropriateness of the design. Some designs (or at least portions of them) may need to be discarded in order to arrive at the best solution. Iterative reviews with technically savvy engineers help to identify the best and optimal design sooner, hence reducing issues found later during the formal design review and during testing.

Characteristic 14: Testability. The software should be designed to be testable. Testability is how easily the software can be tested. Testable software is both visible and controllable. Pressman notes that testable software has the following characteristics [2]:

  • Operability—the software does what it is supposed to as per the requirements and design.

  • Observability—the software inputs, internal variables, and outputs can be observed during execution to determine if tests pass or fail.

  • Controllability—the software outputs can be altered using the given inputs.

  • Decomposability—the software is constructed as components that can be decomposed and tested independently, if needed.

  • Simplicity—the software has functional simplicity, structural simplicity, and code simplicity. For example, simple algorithms are more testable than complex algorithms.

  • Stability—the software is not changing or is only changing slightly.

  • Understandability—good documentation helps testers to understand the software and therefore to test it more thoroughly.

Some features that may help make the software testable are the following [10]:

  • Error or fault logging. Such functionality in the software may provide the testers a way to better understand the software behavior.

  • Diagnostics. Diagnostic software (such as code integrity checks or memory checks) can help identify problems in the system.

  • Test points. These provide hooks into the software that can be useful for testing.

  • Access to interfaces. This can help with interface and integration testing.
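A minimal fault log of the kind mentioned above might look like the following C sketch (sizes and names invented): a fixed-size ring that retains the most recent fault codes for a tester to read back.

```c
/* A fixed-size ring of fault codes; a tester can read the retained
 * entries back to understand the software's recent behavior. */
#define LOG_SIZE 4
static int      log_buf[LOG_SIZE];
static unsigned log_count;

void log_fault(int code) {
    log_buf[log_count % LOG_SIZE] = code;
    log_count++;
}

unsigned log_entries(void) {   /* number of entries currently retained */
    return (log_count < LOG_SIZE) ? log_count : LOG_SIZE;
}

int log_entry(unsigned i) {    /* i = 0 is the oldest retained entry */
    unsigned start = (log_count <= LOG_SIZE) ? 0u : log_count % LOG_SIZE;
    return log_buf[(start + i) % LOG_SIZE];
}
```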

Coordination between the test engineers and the designers can help make the software more testable. In particular, test engineers should be involved during the design reviews and even sooner if possible.

Characteristic 15: Avoids undesired features. Typically the following are prohibited in safety-critical designs:

  • Recursive function execution, that is, functions that can call themselves, either directly or indirectly. Without extreme caution and specific design actions, the use of recursive procedures may result in unpredictable and potentially large use of stack space.

  • Self-modifying code, that is, code that alters its own instructions while it is executing, normally to reduce the instruction path length, improve performance, or reduce repetitively similar code.

  • Dynamic memory allocation, unless the allocation is done only once in a deterministic fashion during system initialization. DO-332 describes concerns and provides guidance on dynamic memory allocation.
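The dynamic memory exception, allocation performed once deterministically during initialization, is often implemented as a fixed pool. The sketch below is a simplified, hypothetical example of that pattern:

```c
#include <stddef.h>

/* All buffers come from a fixed pool carved out during initialization;
 * after initialization, no further allocation occurs. */
#define POOL_BYTES 1024
static unsigned char pool[POOL_BYTES];
static size_t pool_used;

void *pool_alloc(size_t bytes) {   /* intended to be called only at init */
    if (bytes > POOL_BYTES - pool_used) {
        return NULL;               /* pool exhausted */
    }
    void *p = &pool[pool_used];
    pool_used += bytes;
    return p;
}
```

Because the pool size is fixed at compile time and allocation order is the same on every start-up, memory usage is fully deterministic, which is the property the prohibition is protecting.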

7.4 Design Verification

As with the software requirements, the software design needs to be verified. This is typically carried out through a peer review. The recommendations for peer reviews in Chapter 6 also apply to design reviews. During the design review, both the LLRs and the architecture are evaluated. The DO-178C Table A-4 objectives for the design verification are listed and explained in the following [1]:

  • DO-178C Table A-4 objective 1: “Low-level requirements comply with high-level requirements.” This ensures that the LLRs completely and accurately implement the HLRs. That is, all functionality identified in the HLRs has been identified in the LLRs.

  • DO-178C Table A-4 objective 2: “Low-level requirements are accurate and consistent.” This ensures that the LLRs are error free and consistent with themselves, as well as with the HLRs.

  • DO-178C Table A-4 objective 3: “Low-level requirements are compatible with target computer.” This verifies any target dependencies of the LLRs.

  • DO-178C Table A-4 objective 4: “Low-level requirements are verifiable.” This typically focuses on the testability of the LLRs. For levels A–C the LLRs will need to be tested. Therefore, verifiability needs to be considered during the initial development of the LLRs. (Characteristic #14 in Section 7.3 provides additional discussion on testability.)

  • DO-178C Table A-4 objective 5: “Low-level requirements conform to standards.” Chapter 5 discusses the development of the design standards. During the review, the LLRs are evaluated for their conformance to the standards.*

  • DO-178C Table A-4 objective 6: “Low-level requirements are traceable to high-level requirements.” This is closely related to Table A-4 objective 1. The bidirectional traceability between HLRs and LLRs supports the compliance of the LLRs to the HLRs. During the review, the accuracy of the traces is also verified. Any traces that are unclear should be evaluated and either modified or explained in the rationale. Traceability concepts are discussed in Chapter 6.

  • DO-178C Table A-4 objective 7: “Algorithms are accurate.” Any mathematical algorithms should be reviewed by someone with the appropriate background to confirm the accuracy of the algorithm. If an algorithm is being reused from a previous system that was thoroughly reviewed and the algorithm is unchanged, the review evidence from the previous development may be used. The reuse of such verification evidence should be noted in the plans.

  • DO-178C Table A-4 objective 8: “Software architecture is compatible with high-level requirements.” Oftentimes, there is a tracing or mapping between the requirements and architecture to help confirm their compatibility.

  • DO-178C Table A-4 objective 9: “Software architecture is consistent.” This verification objective ensures that the components of the software architecture are consistent and correct.

  • DO-178C Table A-4 objective 10: “Software architecture is compatible with target computer.” This confirms that the architecture is appropriate for the specific target on which the software will be implemented.

  • DO-178C Table A-4 objective 11: “Software architecture is verifiable.” As previously noted, testability should be considered when developing the architecture. (Characteristic #14 in Section 7.3 provides additional discussion on testability.)

  • DO-178C Table A-4 objective 12: “Software architecture conforms to standards.” The design standards should be followed when developing the architecture. For levels A, B, and C, both the architecture and the LLRs are assessed for compliance with the design standards.

  • DO-178C Table A-4 objective 13: “Software partitioning integrity is confirmed.” If partitioning is used, it must be addressed in the design and verified. Chapter 21 examines the partitioning topic.

A checklist is usually used during the design review to assist engineers in their thorough evaluation of the design. The earlier recommendations (Section 7.3) and the DO-178C Table A-4 objectives provide a good starting point for the design standards and review checklist.

References

1. RTCA DO-178C, Software Considerations in Airborne Systems and Equipment Certification (Washington, DC: RTCA, Inc., December 2011).

2. R.S. Pressman, Software Engineering: A Practitioner’s Approach, 7th edn. (New York: McGraw-Hill, 2010).

3. RTCA DO-248C, Supporting Information for DO-178C and DO-278A (Washington, DC: RTCA, Inc., December 2011).

4. K. Shumate and M. Keller, Software Specification and Design: A Disciplined Approach for Real-Time Systems (New York: John Wiley & Sons, 1992).

5. M. Page-Jones, The Practical Guide to Structured Systems Design (New York: Yourdon Press, 1980).

6. J. Cooling, Software Engineering for Real-Time Systems (Harlow, U.K.: Addison-Wesley, 2003).

7. H.V. Vliet, Software Engineering: Principles and Practice, 3rd edn. (Chichester, U.K.: Wiley, 2008).

8. E. Yourdon and L. Constantine, Structured Design: Fundamentals of a Discipline of Computer Program and Systems Design (New York: Yourdon Press, 1975).

9. S. Paasch, Software coupling, presentation at Federal Aviation Administration Designated Engineering Representative Conference (Long Beach, CA: September 1998).

10. C. Kaner, J. Bach, and B. Pettichord, Lessons Learned in Software Testing (New York: John Wiley & Sons, 2002).

*Brackets added for consistency.

*It should be noted that some projects apply the requirements standards to the LLRs rather than the design standards.
