8
Validation by Modelling and Physical Testing

8.1 Introduction

Chapter 6 described how concepts for new products are formulated, and Chapter 7 described how to identify the engineering risks that must be overcome for those products to be robust offerings in the marketplace. This chapter is concerned with the analysis, modelling, and test work that can be used to identify and overcome those risks before new products or services are launched. The aim, as always, is to maximise the reliability of new products at their points of launch.

The approach is sometimes described in a V‐shaped or ‘waterfall’ model, as indicated below (see Figure 8.1):


Figure 8.1 Waterfall development model.

The left leg of the V signifies the design phase of the development, starting with the overall product and progressing through systems and assemblies down to the individual details. Then, the right leg of the V signifies the development and validation work, which is undertaken to identify any issues or problems that the design might possess, starting at the level of the individual component and working up to the complete system.

This is a good overview of what happens, and the approach is also used in other fields such as software development, but it simplifies how validation takes place, making it seem a neater, more serial activity than is actually the case. Rather than a 'box‐ticking' exercise, validation is in practice an iterative development process. It is nonetheless true that validation‐type activities take place at the level of the individual component, the system, and the complete product. The model highlights that components must work individually but must also work together as a complete system.

The different forms of validation are described below, after confirming the overriding purpose of the work.

8.2 Purpose of Development and Validation Work

In simple terms, the purpose of development and validation is to confirm that the product performs as intended and will continue to do so reliably throughout its working life. In summary, validation will cover:

  • Performance testing to verify that the product or system functions as intended and delivers to the expectations of customers
  • Testing to confirm that it will meet any relevant legislative, safety, or regulatory requirements, including formal acceptance tests
  • Life testing to confirm that the product will continue to function to its requirements throughout its life and that the life is as expected
  • Extreme testing, to confirm that the product will continue to function at the extremes of its operating envelope or when subject to abuse, up to a pre‐determined level, or when operating in different environmental conditions
  • Reliability testing to prove that the product will function dependably, in terms of failure rate or other similar measurements, throughout its design life
  • Confirmation testing to demonstrate that the product made by volume production methods performs in the same way as those made by earlier methods

The validation activity, whatever form it takes, is intended to identify problems where the product does not conform to its requirements, understand their causes, and then test solutions to them. Many of the requirements may take the form of standards, either from public sources or, in the case of larger companies, applying within the organisation. There should also be a particular focus on the risks identified earlier, with the intention of ensuring that they do not materialise as real‐life problems. Both the standards and the identified risks bring, in effect, experiences of the past into the present. The intention is to ensure that problems or defects do not escape the validation process and reach the customer.

The discipline can also be described as 'reliability engineering', and this title has some validity in the sense that the overriding goal is to achieve a reliable product. For the purposes of this book, however, a narrower understanding of 'reliability' is taken; as indicated above, the term is used to describe the numerical reliability (one minus the probability of failure – see Section 8.13) associated with the product or system.

8.3 Methods

For the purposes of discussion in this book, methods of development and validation are broken down into three distinct groups of activity:

  • Calculation
  • Modelling and simulation
  • Physical testing

These groups are not mutually exclusive; many components or systems will be the subject of all three types of development, and in the order suggested. This is also the approximate order of increasing cost, so getting things right at the early stages will save money later, especially if repeat test work can be avoided. Experience over recent decades has shown that better modelling and simulation techniques have substantially reduced the need for later physical testing, although many organisations still want the confidence of physical sign‐off, as is the case with many regulatory requirements. Modelling has also made possible solutions where trial‐and‐error physical testing is simply not possible – space travel and nuclear power being two examples.

8.4 Validation and Test Programmes

Stating the obvious, an integrated programme of development, validation, and testing should be drawn up as early as possible in a technology or product programme. It should be based on established practices and previous company learning but should also cover the points raised in the risk analysis, some of which might be outside the organisation's earlier experience or be particularly critical to the success of the product. The programme will require several iterations and will only be capable of being drawn up in full detail when the technology has matured to TRL 5 or 6.

Validation work of this type is time‐consuming and often needs specialised skills and resources such as computing power and test facilities. If third‐party facilities are needed, booking well ahead may be necessary. Test items will also need to be made and quantities estimated. In the case of both resources and material, allowance should be made for failures and repeat tests, noting that the purpose of much testing is to cause failures and understand failure mechanisms rather than to run through a programme unscathed. One of the key aims of the programme should be to reveal failure mechanisms as early as possible.

8.5 Engineering Calculation

The use of fundamental engineering theory is the most basic means by which the behaviour of new products can be predicted and evaluated prior to their physical realisation. Development of new ideas, for example using sketches and simple drawings, goes hand‐in‐hand with basic engineering calculations (see Figure 8.2).


Figure 8.2 Engineering calculation – Sir Frank Whittle at work, before the days of computers!

The methods available are too numerous to list; they cover all areas of engineering from structures, materials, mechanics, fluid flow, and thermodynamics to electronics, turbo‐machinery, and nuclear engineering. The personal computer, and its associated software, has revolutionised this unseen aspect of engineering by automating much of the hard work that was formerly the province of the slide rule and data sheets, and by improving its accuracy.

Whilst spreadsheets are the obvious starting point for engineering analysis, most of the basic engineering theory is available, in effect pre‐programmed, in proprietary software packages. Spreadsheets are good for purely numerical calculations, but are not as well suited to calculus and differential equations. Individual companies then have their own privately developed versions to support, confidentially, their own specialised in‐house needs. However, proprietary software is no guarantee of accuracy, even if it is based on fully accepted theory. Correlation of new methods with real life is vital, and checking of calculations by more experienced staff, using their experience and common sense, is still an important part of the engineering quality system.
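As a minimal sketch of the kind of first‐pass calculation meant here, the snippet below computes the tip deflection of a cantilever beam under an end load using the standard closed‐form result. All the numbers are illustrative assumptions, not values from the text.

```python
# First-pass engineering calculation: tip deflection of a cantilever
# beam with an end load, delta = F * L**3 / (3 * E * I).
# All values below are illustrative assumptions.

def rect_second_moment(b: float, h: float) -> float:
    """Second moment of area of a solid rectangular section (m^4)."""
    return b * h**3 / 12.0

def cantilever_tip_deflection(F: float, L: float, E: float, I: float) -> float:
    """Tip deflection (m) of a cantilever carrying end load F (N)."""
    return F * L**3 / (3.0 * E * I)

I = rect_second_moment(b=0.04, h=0.06)           # 40 x 60 mm steel section
delta = cantilever_tip_deflection(F=500.0, L=2.0, E=200e9, I=I)
print(f"Tip deflection: {delta * 1000:.2f} mm")  # about 9.26 mm
```

A spreadsheet would do the same job; the point is that such checks are cheap, repeatable, and easy for a more experienced engineer to review.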

8.6 Modelling and Simulation

Computer‐based modelling and simulation tools are now in ubiquitous use across the spectrum of engineering disciplines. The boundary between the calculation methods noted above and modelling is a matter of debate. Models are generally used to take analysis to a higher level of complexity and detail, examining complete products, systems, processes, and their performance. Conversely, calculation methods, as defined in this book, tend to have narrower boundaries and can be done relatively quickly. The latter is therefore more suited to the earlier stages of engineering development.

As an aside, it should be noted, however, that before the advent of computers, some very complex analyses were actually undertaken by hand. Nevil Shute's autobiographical book Slide Rule (Ref. 1) describes how the statically indeterminate R100 airship structure was analysed in the 1920s by a team of ‘calculators’ (Shute was ‘chief calculator’) over many weeks and with some errors that required backtracking. A finite element structural model would do the job today in a matter of minutes. The 2016 film Hidden Figures tells a similar story about calculation methods in the early days of NASA.

Models do take some investment to set up and depend on some level of engineering detail being available for the product or system being modelled. They are usually constructed by building up the model from elements representing subsystems or components of the product. Once in place, however, models can be run to produce simulations (simulation is the term used to describe the running of mathematical models) of a wide range of scenarios or conditions to explore a system's behaviour. This may include simulations of dangerous conditions or may replace physical tests that could consume a lot of expensive material, e.g. crash testing. Once established, they can be updated and re‐run as a product develops, and their accuracy can be improved as physical data become available. Some software includes the ability to perform re‐runs automatically and for the software to home in on the optimal solution.

Within the field of engineering, models are widely used. Examples of topics include engineering structures (see Figure 8.3), fluid flow, dynamic behaviour, electronic circuit performance, and atmospheric dispersion of gases. Outside engineering, models are available for such diverse topics as weather forecasting, the spread of viruses, the first milliseconds of the universe, social behaviour, and traffic flow!


Figure 8.3 Example of finite element model of part of Rail Carbody Structure.

A distinction is usually drawn between models that are essentially static, representing one set of conditions, and those that are dynamic, tracing behaviour over a period of time. A static structural model would be an example of the first and a weather forecasting model an example of the second. The latter generally requires much more computing power, and a further distinction is drawn between those that move forward in discrete time steps versus those that produce a time‐continuous output.
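A dynamic model that moves forward in discrete time steps can be illustrated with a very simple case: a damped spring‐mass system stepped forward with semi‐implicit Euler integration. The parameters are illustrative; a real simulation would be validated against physical test data.

```python
# Minimal dynamic simulation: a damped spring-mass system stepped
# forward in discrete time (semi-implicit Euler). Parameters are
# illustrative assumptions, not data from any real product.

m, c, k = 1.0, 0.8, 40.0      # mass (kg), damping (N s/m), stiffness (N/m)
x, v = 0.05, 0.0              # initial displacement (m) and velocity (m/s)
dt, t_end = 0.001, 2.0        # time step and duration (s)

t = 0.0
while t < t_end:
    a = (-c * v - k * x) / m  # acceleration from Newton's second law
    v += a * dt               # update velocity first (semi-implicit)
    x += v * dt               # then displacement
    t += dt

print(f"Displacement after {t_end} s: {x * 1000:.2f} mm")
```

Shrinking the time step improves accuracy at the cost of computing time, which is the trade‐off behind the observation that time‐continuous behaviour demands much more computing power.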

The growth of modelling and simulation has been facilitated by the reducing cost, and increasing availability, of computing power. This, in turn, has been supported by software developments that provide the mathematical models themselves around which the software operates. Often, this software also automates the initial compilation of models and provides results in impressive, easily digestible, and graphic form.

This then introduces one of the downsides of readily available modelling software: how accurate is it? The adage 'garbage in, garbage out' is very relevant. It could be argued that models are only as good as the correlation that has been made with the physical world. Where this correlation is in place, as it is for well‐established applications, models can go a long way towards replacing physical testing, as well as covering situations where physical testing would be hazardous, impossible, or simply too expensive. A word of caution is nonetheless appropriate: checking of simulations, both the models and their results, by more experienced staff is another important part of the engineering quality system.

A further extension of modelling, which some would consider a separate discipline, is ‘virtual reality’. The ability to ‘see’ a product or system in three dimensions opens up the possibility of trialling its use and exploring its serviceability. Again, problems can be discovered at an earlier stage of development ahead of expensive hardware commitments.

The field of modelling is always moving forward with developments into increasingly difficult and complex areas, often requiring the use of high‐powered supercomputers. For most engineering applications, however, tried and tested models and software are readily available.

The concepts described above relate mainly to prelaunch engineering development aimed at ensuring the integrity of the final product. The use of modelling and simulation techniques is being extended into the post‐delivery phase by creating what are described as ‘digital twins’. These models are constantly used, updated, and refined based on in‐service data, supporting the operation of the product in the field and providing feedback for use in future design work – practical examples of the use of Internet of Things and Big Data technologies. The same approach can be taken in creating digital twins of manufacturing systems.

8.7 Physical Testing

Trialling of a new technology or product in physical form is the ultimate test of the idea and provides the closest match with the real‐world operation. A trial programme could include testing by customers, in cooperation with the developer, in normal operating conditions.

A complete prototype of a complicated product may also be the first opportunity to test all the elements of a product together to reveal any complex and detrimental interactions between the systems making up the product. These interactions are difficult to model either by calculation or simulation. In practice, test work can be carried out on components, systems, or complete products, and may be conducted using laboratory rigs or under field conditions.

Test work can aim to prove a number of aspects of a new design, and different approaches are required for each. In principle, testing can examine:

  • Functional performance
  • Life
  • Reliability
  • Environmental performance
  • Serviceability

Performance testing has the aim of proving that the design provides the benefits expected by, or specified to, the end customer. Instrumenting test pieces, cycling them through the product's operating envelope, and analysing the data are the means in principle by which questions are answered. Established companies have batteries of test codes for doing this work, often developed over many years and, in effect, representing the accumulated experience of the organisation in making reliable and successful products. Compliance with legislation may also need to be demonstrated by testing to internationally agreed test codes. Such tests are normally witnessed by the certifying body.

In the early stages of development, performance testing might be confined to relatively ‘normal’ operating conditions, representative of a careful operator in a controlled environment. This may be sufficient to prove the concept. However, a proportion of products will be abused, overloaded, or operated in some way outside a normal duty cycle. This could also include operation in hostile environments which might include extremes of:

  • Temperature
  • Humidity
  • Vibration and shock
  • Electrical surges and electromagnetic pulses
  • Dust, salt, and dirt

Alternatively, a potentially dangerous condition may arise by accident and the product or system is expected to deal with it. Testing in abnormal conditions should therefore be planned. This could include extremes of temperature (hot and cold), extremes of load, electrical extremes, edges of the flight envelope in the case of an aircraft, or combinations of conditions that may be unlikely but nonetheless could arise. The risk analysis should define these requirements.
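The combinations of extreme conditions to be tested can be enumerated systematically from the operating envelope. The sketch below does this with `itertools.product`; the factors and levels are hypothetical examples, not a real test code.

```python
# Enumerate combined extreme test conditions from an operating envelope.
# The factors and levels here are hypothetical illustrations.
from itertools import product

envelope = {
    "temperature_C": (-30, 20, 55),   # cold soak, ambient, hot soak
    "humidity_pct":  (10, 95),
    "supply_V":      (10.8, 12.0, 15.0),
}

conditions = [dict(zip(envelope, levels))
              for levels in product(*envelope.values())]
print(f"{len(conditions)} combinations to test")  # 3 * 2 * 3 = 18
for cond in conditions[:3]:
    print(cond)
```

In practice the risk analysis would prune this full factorial set to the combinations that are credible and damaging, since the count grows multiplicatively with each added factor.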

A conscious choice can be made about the extent to which these extremes should be covered. Regulatory requirements specify what is required in many industries but, where this is not the case, it may be uncompetitive to provide too robust a product. Nonetheless, it would be naïve to think that customers or operators are always careful; they have a habit of pushing products to their limits, and difficult or dangerous conditions may also arise by accident. These issues matter most for completely new technologies and companies; established industries usually have empirical rules that cover them.

Testing of this type can be quite close to being representative of real life. However, whilst prototype material may look exactly like the final, production version, methods of manufacture will almost certainly be different. This can impact the accuracy of results, and some repeat testing of early production material should be considered.

Life testing presents more problems. If a product is expected to last 10–30 years, how can this be compressed into a test programme lasting a few months or a few years? There are several ways of addressing this issue. The simplest way is some form of overload testing, where a product is put through a much more arduous duty cycle than it would experience in real life. Crude though this approach may sound, there is some science behind it in certain circumstances. For example, if the life of a product's structure is determined by fatigue behaviour, accelerated overload testing, in effect, misses out the nondamaging low‐level loads and just concentrates on the high‐level damaging loads. This is the thinking behind pavé testing of road vehicles, which was introduced as an empirically based accelerated test method but which now has some theory and correlation to support it. This type of test can accelerate life by factors between 50 and 100. Care must be exercised to ensure that overload testing does not create unrepresentative failure modes.
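The logic of accelerated overload testing can be made concrete with a Miner's‐rule damage sum under an assumed S‐N (Basquin) fatigue law: because damage per cycle scales as a high power of stress, dropping the low‐level loads loses almost no damage while saving most of the test time. All figures below are illustrative assumptions.

```python
# Why overload testing accelerates life: with a Basquin S-N law,
# relative fatigue damage per cycle scales as stress**m, so low-level
# loads contribute almost nothing. Figures are illustrative only.

m = 4.0  # assumed Basquin exponent for the detail in question

# Hypothetical in-service spectrum: (stress amplitude MPa, cycles/day)
service = [(20, 50_000), (60, 400), (120, 20)]

def rel_damage(spectrum, m):
    """Relative Miner's-rule damage per day (arbitrary units)."""
    return sum(n * s**m for s, n in spectrum)

# Accelerated test: keep only the damaging high loads, run continuously
test = [(120, 5_000)]

accel = rel_damage(test, m) / rel_damage(service, m)
print(f"Acceleration factor: {accel:.0f}x")  # about 60x
```

The same calculation shows the danger noted above: raising the test stress still further would raise the acceleration factor, but could push the part into failure modes that never occur in service.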

An alternative, used in the aerospace industry, is to have a long‐running fatigue test using realistic, in‐service loads but always keeping ahead of flying aircraft. With this type of test, the timescale acceleration comes from concentrating on take‐off/de‐pressurisation/pressurisation/landing cycles, missing out the steady‐state conditions.

If the product has a service life that involves intermittent use, e.g. a domestic appliance, then test units could be run on a continuous basis to accumulate running hours to match the full life in a short period. In this situation, though, care must be taken to simulate start‐up and shut‐down, which may be the determining factors in setting a product's life. In a similar way, locks, mechanisms, hinges, catches, and doors can be tested cyclically with a simple test rig, which can produce 5000–10 000 cycles per day. Tests of this type are usually stopped periodically to examine parts for wear or damage.
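The arithmetic of compressing a service life onto such a rig is simple but worth setting out; the usage figures below are illustrative assumptions.

```python
# Quick check: how long does a cyclic rig need to reproduce a full
# service life of latch/door cycles? Figures are illustrative.

uses_per_day_in_service = 10
service_life_years = 15
life_cycles = uses_per_day_in_service * 365 * service_life_years  # 54,750

rig_cycles_per_day = 8_000   # within the 5000-10 000/day range quoted above
days_on_rig = life_cycles / rig_cycles_per_day
print(f"{life_cycles} cycles: about {days_on_rig:.1f} days on the rig")
```

A full design life accumulates in about a week of rig time, which is why periodic stops for wear inspection cost little against the overall programme.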

A further approach, more suited to components than to larger products, is accelerated testing in climatic chambers, where heat, cold, humidity, corrosive effects, or dust may be simulated. Such chambers can simulate a full life in a matter of months. Their correlation with real life is not good, but they are a starting point and can be used to compare different approaches.

The points made above relate primarily to mechanical systems and components. In the case of electronic systems (Ref. 2), the same points are relevant, but there are some differences of emphasis. Testing basic functionality, which could be complex and detailed, is the first requirement. In relation to life, electronic components are less susceptible to wear‐out but more susceptible to early‐life failures, which, for critical systems, are often screened out by a burn‐in programme. Electronic systems, connectors, and harnesses benefit from development in relation to the installation environment (heating and cooling), vibration, dirt and moisture ingress, and electromagnetic susceptibility (as well as electromagnetic emissions from the system itself).

The methods already described apply where the product is developed principally in the organisation's own environment and not released to customers, other than in very controlled circumstances, until it has achieved a high level of integrity.

8.8 Prototypes Not Possible?

A further consideration is what to do if prototype development is difficult or impossible. For example, fully representative prototypes of major chemical plants, bridges, space vehicles, and nuclear submarines are just not possible. Conversely, prototypes of road vehicles and aircraft can be created readily and tested in their normal working environment, or in specialised test facilities where necessary. However, there are practical limitations imposed by costs – commercial aircraft cost tens of millions of dollars.

Other products, such as rail vehicles, find extended running of prototypes more difficult. There are some dedicated rail test tracks – two in Europe, for example – but these have limitations in terms of length and features. As commented in Chapter 3 when discussing the Advanced Passenger Train, testing new products of this type in normal service with fare‐paying passengers is not a good idea.

In principle, several approaches can be taken, driven by the volume of the product and the characteristics of the market:

  • Where the product is going to a high‐volume consumer market, the expectation is that a reliable product will be available in quantity from the day of the launch. In this situation, the product needs to be fully qualified before the commencement of series production and a pipeline of finished products must be in place before launch.
  • In the case of a low‐ or medium‐volume, high‐cost product, production can be started slowly, perhaps as a pre‐production batch, and products released only to known customers, monitoring performance in the field very closely. Commitment to series production is then made when early‐stage problems have been ironed out.
  • In the case of one‐off, or very small quantities, the product will go through a commissioning period when its basic functioning will be achieved followed by a period of ‘reliability growth’ as problems are overcome. In this situation, customer satisfaction comes from achieving a fully functioning and reliable product in whatever is considered a reasonable time period.

The approach taken is a function of the product's volumes and cost, as well as the established practice in the industry concerned. It is a particularly critical period in that premature release of an underdeveloped product is a guarantee of customer dissatisfaction from which recovery is difficult.

8.9 Physical Test and Laboratory Support Facilities

Physical testing obviously requires an organisation either to have, or to have access to, appropriate test facilities and the means of manufacturing prototype test parts. Their scale is very dependent on the company concerned. Large, well‐established organisations will have facilities on a scale capable of dealing with anything from small components to full‐scale products such as road vehicles, aircraft, or trains. Start‐up companies may rely on university facilities or public laboratories, although they may be able to undertake their own small‐scale rig testing.

Whatever methods or facilities are used, test work needs to be backed with a range of critical support facilities. Calibrated instrumentation and data recording systems are an essential part of physical testing. Similarly, data analysis systems play a critical role in extracting and understanding the results of tests. Metrology equipment is needed to measure components before and after test. Laboratory facilities for materials analysis are needed, which can examine any failures of the type listed earlier, literally under the microscope or using other methods, to understand failure mechanisms.

Established practice is for such facilities to carry approval and accreditation beyond the requirements of ISO9001. The specialised standard for laboratories and test houses is ISO/IEC 17025:2005 (Ref. 3) which ‘specifies the general requirements for the competence to carry out tests and/or calibrations, including sampling. It covers testing and calibration performed using standard methods, non‐standard methods, and laboratory‐developed methods’. This standard is intended to ensure that a laboratory is technically competent.

8.10 Correlation of Modelling and Testing

The methods described previously, whether in the form of calculation, modelling, or testing, depend for their usefulness on their accuracy in representing real‐life operation. There is no easy way of establishing this correlation other than building it up over time and thus accumulating valuable experience. The validation methods used will always be an approximation to real life, but they will nonetheless give a new product a thorough examination that should identify almost every potential defect. The opportunity also exists in every development programme to refine the correlation; for example:

  • Early calculations can be compared with later more thorough and accurate modelling
  • Models can be compared then with physical test results
  • Instrumented products can be operated in the field to gather real‐life data
  • Warranty and customer failure data can be fed back to understand why in‐service failures occurred and why they were not prevented

Consciously making correlations of this type will improve the validation process and is an important part of company learning. Connected devices (Internet of Things) will greatly expand the volume of information about the use of products in service – dealing with the sheer volume of data may be the biggest challenge.

8.11 Assessment of Serviceability

Most products have specified requirements for serviceability and repair: at one extreme, some products may be designed (rightly or wrongly) to be thrown away if failure occurs; at the other extreme, complex equipment will require regular servicing and repair throughout its life. The ability to carry out efficient servicing and repair will be built into the documented requirements for these latter products, including, for example, time requirements for specified operations. Confirmation that these objectives can be met should be built into development programmes, as with any other specified requirement.

Quite thorough reviews are now feasible at the design and modelling phase. Three‐dimensional CAD models give the ability to explore access to components and the ease of removal or replacement. Virtual reality (VR) modelling adds another level of realism, providing the ability to simulate maintenance and repair activities directly. Such assessments can then be repeated once prototype hardware is available, although this should be no more than a confirmation exercise if the simulation work has been thorough. This type of work is best carried out by service personnel, and the findings should be recorded as corrective actions, as with any other test result.

8.12 Software Development and Validation

Most engineering products include software, which typically performs control functions and is therefore critical to the operation of the product or system. As an integral part of a system, software contributes as much as any other element to overall reliability. It does, however, fail in a different way from traditional components, where the emphasis in ensuring reliability is on component and material failure. Unlike these traditional components, software does not fail in a physical way. Rather, it produces unintended outputs when certain combinations of inputs and system states apply. As many of these conditions as possible should be identified and corrected before a product reaches an end customer.

The science of this subject is now very well researched, and whole university departments devote their work to software integrity and, increasingly, security. Their work covers commercial software, such as banking and airline systems, as well as the engineering control systems of interest here. However, in this short section, only a superficial overview of the topic can be provided.

As with other forms of engineering, reliable software derives from a high‐integrity design and development process, covering specification of requirements, initial design, software coding, checking on a modular basis as design proceeds, and independent validation before ‘full‐scale’ testing on complete products. These steps are intended to prevent errors rather than having to detect them later.

The number of potential failure paths in software is a function of its complexity. Detecting failures will therefore be more effective the longer that software is run and the wider the range of conditions for those runs. When errors do occur in software, they are not always easy to diagnose, and they can be difficult to distinguish from hardware problems. However, software errors should repeat – there is not the same variability in coding as there is in manufactured components, and software does not degrade or wear out like physical components.

In exceptionally high‐integrity control systems, it is accepted that absolutely fault‐free software is unattainable, and therefore some level of redundancy may be built in. One path in a multiply redundant control system may then incorporate software written by a team entirely independent of those responsible for the other paths. This way, the error‐generating combination of inputs and system state will be handled differently in the back‐up system.
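A redundant arrangement of this kind is often combined with majority voting across the channels. The sketch below shows the voting logic only; the channel functions are trivial stand‐ins, not a real control law.

```python
# Sketch of majority voting across redundant control channels, one of
# which might be an independently written implementation. The channel
# functions are hypothetical stand-ins for real control software.
from collections import Counter

def channel_a(demand): return round(demand * 0.5, 3)
def channel_b(demand): return round(demand * 0.5, 3)
def channel_c(demand): return round(demand / 2, 3)   # independent coding

def voted_output(demand, channels):
    """Return the majority value; raise if no two channels agree."""
    votes = Counter(ch(demand) for ch in channels)
    value, count = votes.most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority - channels disagree")
    return value

print(voted_output(8.0, [channel_a, channel_b, channel_c]))  # 4.0
```

A single channel producing an unintended output is then outvoted rather than propagated, which is the benefit being sought from the independently written path.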

Software validation therefore needs to be written into a product's overall validation plan, including physical testing in as wide a range of conditions and for as long a period as can be afforded.

8.13 Reliability Testing

Reliability, as a defined term, relates to the expectation that a product or system will operate as intended when called upon to do so. The reliability of existing products operating in the field is often measured objectively using field service data. Following on from this, numerical reliability targets are often set for new products expressed, for example, in terms of failures per year, or reliability percentage per mission, or mean time between failures (MTBF) – see Figure 8.4.

  • Probability of failure – likelihood, e.g. in percentage terms, of failure over a given period or for a particular mission, e.g. 0.1%
  • Reliability – (1 – probability of failure), or the probability of performing a function over a given period or during a particular mission, e.g. 99.9%
  • MTBF (mean time between failures) – mean time between successive failures of a repairable product or system, e.g. 500 hours
  • FPHV (failures per hundred vehicles) – number of failures or problems experienced over 12 months on 100 vehicles
  • Availability – proportion of occasions when a product or system is capable of functioning when called upon to do so, e.g. 99.9%
  • PFD (probability of failure on demand) – for one‐shot systems, e.g. an emergency shutdown system, the probability that the system will fail to operate when called upon to do so

Figure 8.4 Common measures of reliability.
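These measures are related. Under the common simplifying assumption of a constant failure rate, mission reliability follows R(t) = exp(−t/MTBF), and steady‐state availability is often approximated as MTBF/(MTBF + MTTR). A minimal sketch; the MTTR (mean time to repair) figure and the 500‐hour MTBF are illustrative assumptions, not values from the text:

```python
import math

def reliability(mission_hours, mtbf_hours):
    """Mission reliability assuming a constant failure rate
    (exponential model): R(t) = exp(-t / MTBF)."""
    return math.exp(-mission_hours / mtbf_hours)

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative figures only: 500 h MTBF, 10 h mission, 5 h MTTR
r = reliability(10, 500)    # ≈ 0.980
a = availability(500, 5)    # ≈ 0.990
print(f"R(10 h) = {r:.3f}, availability = {a:.3f}")
```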

A point to bear in mind is that there are different understandings of what constitutes ‘failure’. To some, a fastener coming loose and requiring tightening is a failure. To others, the definition relates to the overall failure of a system to function. Different industries have different practices in this respect.

Specifying reliability, whatever measure is used, does, however, raise a fundamental issue. Deliberately designing a product to meet a specific numerical reliability target is simply not possible; reliability performance is an outcome of the development process rather than a property of the product, unlike properties such as mass or top speed. It could be argued that, if reliability could be specified and designed for, then the modes of failure must be known and could therefore be avoided in the first place, which is clearly not the case.

The development and validation methods described previously have the basic aim of identifying defects in the design so they can be corrected before release to the customer. The more defects that are found and corrected, the greater will be the reliability of the product when operating in the field. This statement presupposes that defects are inherent in the design, which is partly but not entirely the case. In addition, the manufacturing process is equally capable of generating defects although a good design will simplify manufacturing and thus reduce the chance of error.

The question then arises as to whether reliability can be objectively measured as part of the development programme. Clearly, this would have to be done in the latter stages of any programme when the bulk of issues had been addressed. It would also ideally use production, rather than prototype, material with all the normal variability that production processes entail.

Within these constraints, a reliability measurement programme is possible in principle. If the intended failure rate of the product is of the order of one to two defects per year (see earlier comment in Chapter 7), then generating meaningful data would require a minimum of six to eight products built to the same standard and operating for six months or more, unless some way of accelerating usage can be found. With relatively inexpensive items, such as domestic appliances, 20 to 30 items could feasibly be tested, giving better results from a statistical viewpoint.
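The sample‐size arithmetic behind such trials can be sketched with the standard zero‐failure ('success run') formula, n = ln(1 − C)/ln(R): the number of units that must all survive the trial to demonstrate reliability R at confidence C. The target figures below are assumptions for illustration:

```python
import math

def success_run_sample_size(reliability, confidence):
    """Smallest number of units that must all complete the trial without
    failure to demonstrate the stated reliability at the stated
    confidence: n = ln(1 - C) / ln(R)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# e.g. demonstrating 90% one-year reliability at 60% confidence
print(success_run_sample_size(0.90, 0.60))  # → 9
```

A handful of units at modest confidence, consistent with the six to eight products suggested above; pushing the confidence level up drives the required sample size up sharply.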

A further approach would be to undertake reliability trials on individual components or subsystems, which can be run in significant quantities in a test laboratory. Such trials will not be completely representative of field operation, but they will generate useful information, and results such as failure mechanisms will emerge quickly.

Fortunately, there are methods of deriving reliability estimates from very small sample sizes. A widely used method (Ref. 4) was developed originally by the Swedish engineer Waloddi Weibull, who first presented his ideas in 1939 and his definitive paper in 1951 to the American Society of Mechanical Engineers, including seven examples of where it could be used. His approach met with some controversy but has proved valuable in the field of engineering, as much as anything for its relative simplicity and clear presentation – it is based on a log‐log plot of cumulative failures against time or number of cycles. Figure 8.5 shows an example.

Typical Weibull plot displaying a solid ascending line with triangle markers. On the right is a box labelled with the parameters Scale Factor = 14351.2, Shape Factor = 0.874, r² = 0.972, and n/s = 9/2.

Figure 8.5 Typical Weibull plot.
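The arithmetic behind such a plot can be sketched as follows: order the failure times, assign each a median rank, and fit a straight line to ln(−ln(1 − F)) against ln(t); the slope is the shape factor and the scale factor follows from the intercept. This is a minimal sketch, not the book's worked example, and the failure times below are invented:

```python
import math

def weibull_fit(failure_times):
    """Fit a two-parameter Weibull distribution by median-rank
    regression: a least-squares line through the points
    (ln t, ln(-ln(1 - F)))."""
    t = sorted(failure_times)
    n = len(t)
    xs, ys = [], []
    for i, ti in enumerate(t, start=1):
        f = (i - 0.3) / (n + 0.4)   # Bernard's median-rank approximation
        xs.append(math.log(ti))
        ys.append(math.log(-math.log(1.0 - f)))
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    shape = slope                         # Weibull shape factor (beta)
    scale = math.exp(-intercept / slope)  # Weibull scale factor (eta)
    return shape, scale

# Invented failure times in hours, for illustration only
shape, scale = weibull_fit([105, 240, 390, 570, 820, 1150])
print(f"shape = {shape:.2f}, scale = {scale:.0f} h")
```

A shape factor below 1 (as in Figure 8.5) suggests early‐life failures; above 1, wear‐out.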

Technically, Weibull's method relates to one failure mode at a time, rather than the mixed failure modes that would be experienced on a complex product. There are related methods (developed by J. T. Duane and Dr Larry H. Crow), again using log‐log plots, which are more accurate when considering multiple failure modes. These methods give the development engineer the ability to gain some early indication of numerical reliability and hence to judge how a product will be received in the field.

In conclusion, reliability testing does have a place in terms of understanding the maturity of a product and whether it will have adequate reliability for the marketplace. It could continue to be measured, either from data from in‐service products or by testing of production batches, as a form of quality oversight. It will, however, measure the outcome of the development process rather than being the means of achieving the target reliability.

8.14 Corrective Action Management

The purpose of all the development and validation work described above, whatever form it takes, is to generate knowledge and learning about the performance and life of the product. This is only of use if something is done with the information, which, in turn, implies a conscious process of managing the learning that has occurred. This point was made earlier in Chapter 2, where some of the principles of managing learning were set out. In summary, the learning or corrective action process should record the following:

  • A description of the problem encountered or the potential improvement that could be made
  • A record or analysis of the problem and what detailed information or data are available
  • An assessment of the root cause of the problem, based on the data about it
  • A note of the potential solution (when first recording the problem)
  • A note of the corrective action planned
  • A record of the problem having actually been closed out in subsequent phases of work

This does not need to be a complicated or bureaucratic process; the main point is ensuring that all opportunities for learning are acted upon and not lost or forgotten. Early signs of most problems that arise in service can be found in earlier stages of development, and service records are themselves another source of learning for future projects.
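As an illustration only, the record described by the bullet points above might be kept as a simple structured type; the field names here are invented, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class CorrectiveAction:
    """Illustrative corrective-action record matching the bullet
    points above; not a prescribed format."""
    description: str              # the problem or potential improvement
    analysis: str = ""            # detailed information or data available
    root_cause: str = ""          # assessed root cause, based on the data
    potential_solution: str = ""  # noted when first recording the problem
    planned_action: str = ""      # the corrective action planned
    closed_out: bool = False      # confirmed closed in later phases of work

item = CorrectiveAction(description="Fastener on access panel works loose")
item.root_cause = "Vibration at idle; no thread-locking specified"
item.planned_action = "Specify thread-locking compound; retest on rig"
item.closed_out = True
```

However the record is held, the essential discipline is the final field: confirming that each item has actually been closed out.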

8.15 Financial Validation

At this point of development, major financial commitments will either have been made or will be close to being made. Although not part of the technical validation of a new product, a detailed financial review of the project should be undertaken at this stage. This needs to be based on obvious parameters such as sales volumes, sales revenue, and product costs but also needs to include one‐off costs such as tooling, investment, development costs, launch costs, and (often forgotten) working capital and spares inventories.

The most rigorous form of analysis at this point is a discounted cash flow, using the organisation's weighted cost of capital as the discount factor. Figure 8.6 shows an example of such an analysis, with cumulative cash flow (in $m or £m) plotted against time in years.

Example of discounted cash flow analysis displaying circle markers fitted on 2 discrete ascending curves labeled GROSS (solid) and DISCOUNTED (dotted).

Figure 8.6 Example of discounted cash flow analysis.

This particular example has a period of 2 years prior to launch and then 7 years afterwards and uses an 8% discount factor, which might be considered low. Every company will have its own criteria for judging what is considered to be an adequate return and, of course, different projects could be compared using this methodology.
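The underlying arithmetic can be sketched as follows: discount each year's net cash flow at the chosen rate and accumulate the results, which reproduces the shape of the 'DISCOUNTED' curve. The cash flows below are invented for illustration, not the book's example:

```python
def discounted_cumulative(cash_flows, rate):
    """Cumulative discounted cash flow: each year's net flow is divided
    by (1 + rate)**year and added to a running total (the NPV to date)."""
    total = 0.0
    series = []
    for year, flow in enumerate(cash_flows):
        total += flow / (1.0 + rate) ** year
        series.append(total)
    return series

# Invented net flows in $m: two years of spend, then seven of return
flows = [-30, -20, 5, 15, 20, 20, 18, 15, 12]
curve = discounted_cumulative(flows, 0.08)  # 8% discount factor
print(f"NPV after year {len(flows) - 1}: {curve[-1]:.1f}")
```

Because later revenues are discounted more heavily than early spend, the discounted curve crosses break‐even later than the gross curve, which is why delayed‐revenue models such as leasing demand particular attention.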

The key point is the compilation of a time‐based inventory of all future costs and revenues. Although the example and the text are based around a volume product, the same principles can be used with one‐off or very low volume solutions. It can also be used where ownership of the product is retained by the manufacturer and revenue arises from lease or from sale of the ‘effect’ of the product. In these instances, of course, cash flow is delayed, relative to a straight sale, so a time‐based analysis is even more important to ensure financial viability.

8.16 Concluding Points

This chapter has described the processes for ensuring that technology and product development effort delivers a robust and reliable end result. This objective is partly achieved by selecting the right concepts in the first place but is more dependent on the detailed execution of thorough development programmes. These programmes would ideally be based around an organisation's existing development and sign‐off codes, which represent the organisation's accumulated learning and experience in this field. Programmes would also be driven by risk analysis, identifying areas of particular concern, especially where new technology, new markets, or new operating regimes are coming into play. The latter three are the areas that could be problematic as they are, by definition, outside previous experience. For start‐up companies, all experience is new and, hence, developing a new technology or product to the point where it is reliable is even more challenging.

Development and validation activities are undertaken by a combination of engineering calculation work, modelling and simulation, and physical testing, in approximately that order. The more thorough and the earlier the calculation and modelling work, the less physical testing, which is expensive, will be needed. This is better also for the project in total, as other planning work will benefit from having better‐researched early stages. However, there is no substitute for some level of physical testing. In a complex product, physical prototypes are the opportunity to see all the components of the product working together – something that is difficult to simulate. Validation and sign‐off, including regulatory approval, are normally based on physical products. Physical testing does need access to competent test facilities, with extensive support for instrumentation, data gathering and analysis, and failure analysis.

The activity of development is very much one of learning: understanding how a product performs and how it fails, which, in turn, will provide an understanding of what margins exist in normal operation. The work therefore firstly confirms that the product or system meets the performance expected of it in terms of its functionality, including any regulatory requirements that are relevant. This information should be understood for a range of operating regimes in terms of duty cycle and environment, for example, and not just for everyday usage.

The work secondly looks at the life of the product for which various methods of accelerated durability testing exist. The methods of compressing a number of years of life into a short period are not perfect, but they will flush out most problems.

It goes without saying that development and validation programmes need to be carefully planned, with the activities following each other in the right sequence. The learning points generated by the work need to be recorded and follow‐up confirmed.

In the later stages of development programmes, numerical measurements of reliability can be made if a statistically meaningful number of products (which need not be large) can be made by production methods and operated in realistic conditions. It should be noted that a product's reliability is set by the thoroughness of the development programme and is not an inherent ‘property’ of the design.

Financial validation should also be undertaken at this point and methods for doing so are described.

The points above relate mainly to products that are produced in some quantity and where there is a clear distinction between prototypes and production items. Some engineering products are produced either as one‐offs, e.g. process plants or bridges, or in very low and expensive quantities, e.g. submarines or spacecraft. In these situations, an even greater emphasis is placed on calculation and modelling work. The physical product is then the final item, which must go through a period of commissioning and shake‐down. Customer satisfaction in this situation comes from engineering solutions which perform well from the outset and where the commissioning process is effective in finalising the solution.

References

There are relatively few publications relevant to this chapter.

Nevil Shute's autobiography includes some interesting material from 80 or 90 years ago, including the forerunner of finite element methods, performed manually:

  1. Shute, N. (1954). Slide Rule: The Autobiography of an Engineer. London: Heinemann.

This book provides a substantial overview of reliability engineering as applied to electronic systems:

  2. Swingler, J. (2015). Reliability Characterisation of Electrical and Electronic Systems. Woodhead Publishing.

Engineering test laboratories should be set up to operate within this international standard:

  3. ISO/IEC 17025:2005 General Requirements for the Competence of Testing and Calibration Laboratories.

Weibull statistical methods are useful where there is a relatively small number of test samples:

  4. McCool, J.I. (2012). Using the Weibull Distribution: Reliability, Modelling and Inference. Hoboken, NJ: Wiley.