Appendix D Software Service History Questions

Chapter 24 provided the four categories of questions to consider when evaluating software service history. This appendix includes the specific questions from the Federal Aviation Administration’s Software Service History Handbook [1]. The handbook is available on the Federal Aviation Administration website (www.faa.gov); however, these questions are provided for convenience.

D.1 Questions Regarding Problem Reporting [1]

  1. Are the software versions tracked during the service history duration?
  2. Are problem reports tracked with respect to particular versions of software?
  3. Are problem reports associated with the solutions/patches and an analysis of change impact?
  4. Is revision/change history maintained for different versions of the software?
  5. Have change impact analyses been performed for changes?
  6. Were in-service problems reported?
  7. Were all reported problems recorded?
  8. Were these problem reports stored in a repository from which they can be retrieved?
  9. Were in-service problems thoroughly analyzed, and/or were those analyses included or appropriately referenced in the problem reports?
  10. Are problems within the problem report repository classified?
  11. If the same type of problem was reported multiple times, were there multiple entries or a single entry for a specific problem?
  12. If problems were found in the lab in executing copies of operational versions of software during the service history period, were these problems included in the problem reporting system?
  13. Is each problem report tracked with the status of whether it is fixed or open?
  14. If the problem was fixed, is there a record of how the problem was fixed (in requirements, design, code)?
  15. Is there a record of a new version of software with new release after the problem was fixed?
  16. Are there problems with no corresponding record of change in software version?
  17. Does the change history show that the software is currently stable and mature?
  18. Does the product have the property of exhibiting the error with a message to the user? (Some products may not have error trapping facilities, so they may just continue executing with wrong results and with no indication of failure.)
  19. Has the vendor (or the problem report collecting agency) made it clear to all users that problems are being collected and corrected?
  20. Are all problems within the problem report repository classified?
  21. Are safety-related problems identified as such? Can safety-related problems be retrieved?
  22. Is there a record of which safety problems are fixed and which problems are left open?
  23. Is there enough data after the last fix of safety-related problems to assess that the problem has been corrected and no new safety-related problems have surfaced?
  24. Do open problem reports have any safety impact?
  25. Is there enough data after the last fix of safety-related problems to assess that the problem is solved and no new safety-related problems have surfaced?
  26. Are the problem reports and their solutions classified to indicate how a fix was implemented?
  27. Is it possible to trace particular patches to release versions and infer from design and code fixes that the new versions correspond to these fixes?
  28. Is it possible to separate the problem reports that were fixed in the hardware or by a change of requirements?
  29. Are problem reports associated with the solutions/patches and an analysis of change?
  30. If the solutions indicated a change in the hardware or mode of usage or requirements, is there an analysis of whether these changes invalidate the service history data before that change?
  31. Is there a fix to a problem with changes to software but with no record of the change in the software version?
  32. Is the definition of the service period appropriate to the nature of the software in question?
  33. How many copies of the software are in use and being tracked for problems?
  34. How many of these applications can be considered to be similar in operation and environment?
  35. Are the input/output domains the same between the service duration and the proposed usage?
  36. If the input/output domains are different, can they be amended using glue code?
  37. Does the service period include normal and abnormal operating conditions?
  38. Is there a record of the total number of service calls received during the period?
  39. Were warnings and service interruptions a part of this problem-reporting system?
  40. Were warnings analyzed to assure that they were or were not problems?
  41. Was there a procedure used to log the problem reports as errors?
  42. What was the reasoning behind the contents of the procedure?
  43. Is there evidence that this procedure was enforced and used consistently throughout the service history period?
  44. Does the history of warranty claims made on the product match with the kind of problems seen in the service history?
  45. Have problem reports identified as nonsafety problems in the original domain been reviewed to determine whether they are safety related in the target domain?

D.2 Questions Regarding Operation [1]

  1. Is the intended software operation similar to the usage during the service history (i.e., its interface with the external world, people, and procedures)?
  2. Have the differences between service history usage and proposed usage been analyzed?
  3. Are there differences in the operating modes in the new usage?
  4. Are only some of the functions of the proposed application used in service usage?
  5. Is there a gap analysis of functions that are needed in the proposed application but have not been used in the service duration?
  6. Is the definition of normal operation and normal operation time appropriate to the product?
  7. Does the service period include normal and abnormal operating conditions?
  8. Is there a technology difference in the usage of the product from service history duration (manual vs automatic, user intercept of errors, used within a network vs standalone, etc.)?
  9. Was operator training on procedures required in the use of the product during the recorded service history time period?
  10. Is there a plan to provide similar training in the new operation?
  11. Will the software level for the new system be the same as it was in the old system?

D.3 Questions Regarding Environment [1]

  1. Are the hardware environment of the service history and the target environment similar?
  2. Have the resource differences between the two computers been analyzed (time, memory, accuracy, precision, communication services, built-in tests, fault tolerance, channels and ports, queuing modes, priorities, error recovery actions, etc.)?
  3. Are safety requirements encountered by the product the same in both environments?
  4. Are exceptions encountered by the product the same in both environments?
  5. Is the data needed to analyze similarity of environment available? (Such data are not usually a part of problem data.)
  6. Does the analysis show which portions of the service history data are applicable to the proposed use?
  7. How much service history credit can be assigned to the product, as opposed to the fault-tolerant properties of the computer environment in the service history duration?
  8. Is the product compatible with the target computer without making modifications to the product software?
  9. If the hardware environments are different, have the differences been analyzed?
  10. Were there hardware modifications during the service history period?
  11. If there were hardware modifications, is it still appropriate to consider the service history duration before the modifications?
  12. Are software requirements and design data needed to analyze whether the configuration control of any hardware changes noted in the service history is acceptable?

D.4 Questions Regarding Time [1]

  1. What is the definition of service period?
  2. Is the definition of the service period appropriate to the nature of the software in question?
  3. What is the definition of normal operation time?
  4. Does the normal operation time used in the service period include normal and abnormal operating conditions?
  5. Can contiguous operation time be derived from service history data?
  6. Is the “applicable-service” portion identifiable within the total available service history data?
  7. What was the criterion for evaluating the service period duration?
  8. How many copies of the software are in use and being tracked for problems?
  9. What is the duration of applicable service?
  10. Is applicable-service definition appropriate?
  11. Is this the duration used for calculation of error rates?
  12. How reliable was the means of measuring time?
  13. How consistent was the means of measuring time throughout the service history duration?
  14. Do you have a proposed acceptable error rate that is justifiable and appropriate for the level of safety of the proposed usage, before analyzing the service history data?
  15. How do you propose that this error rate be calculated, before analyzing the service history data?
  16. Is the error rate computation (total errors divided by time duration, by number of execution cycles, by number of events such as landings, by flight hours, by flight distance, or by total population operating time) appropriate to the application in question? What was the total duration of time used for this computation? Has care been taken to consider only the appropriate durations? (A brief worked sketch of these computations follows this list.)
  17. What is the actual error rate computed after analyzing the service history data?
  18. Is this error rate greater than the proposed acceptable error rate defined in the PSAC [Plan for Software Aspects of Certification]?
  19. If the error rate is greater, was analysis conducted to reassess the error rates?

References

1. U.D. Ferrell and T.K. Ferrell, Software Service History Handbook, DOT/FAA/AR-01/116 (Washington, DC: Office of Aviation Research, January 2002).
