Chapter 24. Test Planning

24.1. The Need for a Test Plan

An electromagnetic compatibility (EMC) test plan is a vital part of the specification of a new product. It provides the basis around which the development and preproduction stages can be structured in order for the product to achieve EMC compliance with the minimum of effort. It can act as the schedule for in-house testing or can be used as a contractual document in dealing with an external test house. Although you should prepare it as soon as the project gets underway, it will of necessity need revision as the product itself develops and especially in the light of actual test experience on prototype versions.

24.1.1. The Requirements of Accreditation

You may decide to do the testing at an accredited laboratory. The accreditation standard ISO/IEC 17025 (ISO/IEC, 2005) actively demands that an accredited laboratory has a procedure for the review of requests, tenders, and contracts, which ensures that

  • The requirements, including the methods to be used, are adequately defined, documented, and understood
  • The laboratory has the capabilities and resources to meet the requirements
  • The appropriate test and/or calibration method is selected and is capable of meeting the customer's requirements

Most such laboratories have a mechanism for dealing with this “contract review” stage by asking customers to complete a standard form with details of the equipment to be tested, its support equipment, the standards to be tested against, performance criteria and operating modes, and so forth. It is helpful if customers can anticipate this and provide full details immediately, but often it requires some interaction with and possibly advice from the lab.

As a manufacturer of products using a third-party test lab, you have to remember that although the lab is a specialist in EMC testing it does not necessarily have any knowledge of your product. The interface between the product and the test has to be deliberately created. This is a hidden but major advantage of a company's own in-house EMC test lab—it is in a far better position to create and/or review the product test plan than is a generalist lab.

24.1.2. The Requirements of Standards

24.1.2.1. Def Stan 59–41

Part 2 (Management and Planning Procedures) of the military EMC test standard (U.K. Ministry of Defence, n.d.) has an explicit requirement for a project test plan. To quote that document directly:

  In order to achieve consistency throughout the phases of EMC testing, it is essential to formalize the details of the test procedure for the project. The test methods described in this Standard are necessarily generalized and cannot cover the exigencies of each equipment under test. The test plan shall describe the preferred methods of interpretation of the requirements of this Standard and state in detail the equipment configuration during the test, the methods of supplying power to the equipment, the application of stimulus signals and also the application of electrical or mechanical loading. Without a formalized test plan the results of the EMC test may vary considerably due to possible variations in the test arrangement, thus obscuring the effects of any modifications during development of the equipment to the production stage.

So it is to be expected that any contract for the supply of military equipment to Def Stan 59–41 will demand the EMC test plan as part of the contract.

24.1.2.2. IEC 61000-4-X

Clause 8 of most of the main parts of IEC 61000-4, the basic standards for immunity testing, requires that “the tests shall be carried out on the basis of a test plan that shall include the verification of the performance of the EUT.” Each of the standards naturally has its own requirements for what the test plan should contain, for instance IEC 61000-4-3 for radio frequency (RF) immunity includes

  • The size of the equipment under test (EUT)
  • Representative operating conditions of the EUT
  • Whether the EUT should be tested as tabletop or floor standing or a combination of the two
  • For floor-standing equipment, the height of the support
  • The type of test facility to be used and the position of the radiating antennas
  • The type of antennas to be used
  • The frequency range, dwell time, and frequency steps
  • The size and shape of the uniform field area
  • Whether any partial illumination is used
  • The test level to be applied
  • The type(s) and number of interconnecting wires used and the interface port (of the EUT) to which these are to be connected
  • The performance criteria which are acceptable

IEC 61000-4-5 for surge immunity states: “the test plan shall specify the test set-up with”

  • Generator and other equipment utilized
  • Test level (voltage)
  • Generator source impedance
  • Polarity of the surge
  • Number of tests
  • Time between successive pulses
  • Representative operating conditions of the EUT
  • Locations to which the surges are applied
  • Actual installation conditions

So a complete product test plan has to take account of the needs of each of these basic standards.

24.1.3. The Requirements of the Customer

While the preceding headings point out that both accredited testing and testing to certain standards explicitly require a test plan to be created, in practice this is not always or even often the driving force. Much EMC testing is actually performed in the face of a requirement from a technically sophisticated customer. The automotive, aerospace, rail, and military sectors are good examples of such situations. It is quite usual for the customer to have to agree and sign off the proposed test plan before testing can commence, if the customer is to regard the results as valid.

24.2. Contents of the Test Plan

Given that you will be writing a test plan for your product, what areas should it cover?

24.2.1. Description of the Equipment Under Test

A test plan could cover either a single product or a range of similar products or a complete system. The basic description of the EUT must specify the model number and which (if any) variants are to be tested under this generic model type, to make clear what are the boundaries within which the EMC test results can apply. Important aspects of a general description for many of the tests are

  • Physical size and weight
  • Power supply requirements
  • Number and type of connection ports

24.2.1.1. Stand-Alone or Part of a Larger System?

If the EUT is part of a system, or can only be tested as part of a system, for instance it is a plug-in module or a computer peripheral, then the components of the system of which it is a part must also be specified. This is because these other system components could affect the outcome of the test if they are part of the test environment, and you need to take care that the test results will not be compromised by a failure on their part.

These issues commonly arise when, for instance, third-party monitors are connected to a computer-based instrument or when different design groups in the same company are responsible for individual parts in the whole system. Not only must it be clear where the responsibilities for passes, failures, and consequent remedial action in the tests are to lie, but also you may have to set up the tests so that individual contributions to emissions or immunity signatures can be identified.

24.2.1.2. System Configuration and Criteria for Choosing It

Following on from the preceding, if the EUT can form part of a system or installation that may contain a variety of other different components, you will need to specify a representative system configuration that will allow you to perform the tests. The criteria on which the choice of configuration is based, that is, how you decide what is “representative,” must be made clear.

The wording of the second edition of the EMC Directive, in particular, places a burden on the manufacturer to declare compliance of the product “in all the possible configurations identified by the manufacturer as representative of its intended use.” It would be typical to address this by building an EUT that included at least one of every hardware function that might be available, so that for instance the top-of-the-range model that incorporated every option would represent those lesser models that did not have some options. If certain options are mutually exclusive then this is likely to mean testing more than one build. An example of a hypothetical “compliance test matrix analysis” for a display-based product, not necessarily definitive, might be as shown in Table 24.1.

Table 24.1. Example test matrix
Product | Processor | Interfaces | Display modes | Display size | Cover and attachments
XV200 | 100 MHz | RS485 | 800 × 600 | 12" | Plastic, side bracket
XV250 | 100 MHz | RS485, CAN | 800 × 600, 1024 × 768 | 12" | Plastic, side bracket
XV300 | 100 MHz | RS485, CAN, Ethernet | 800 × 600, 1024 × 768 | 14" | Plastic, side bracket
XV301 | (as XV300) | | | | Plastic, top bracket
XV400 | 133 MHz | RS485, CAN, Ethernet | 800 × 600, 1024 × 768, 1280 × 1024 | 17" | Metal, side bracket
XV401 | (as XV400) | | | | Metal, top bracket

Testing choices:

  • XV300 tested in 800 × 600 with all interfaces connected will represent XV200, 250, and 301 (same processor, largest display size in that group)
  • XV400 tested in 1024 × 768 and 1280 × 1024 will represent XV401 (identical except for fittings)
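
Where the option matrix is larger than this, it can help to check the chosen builds mechanically. The following is a minimal sketch (Python) of the coverage reasoning behind Table 24.1; the attribute encoding is hypothetical, display size is handled by the worst-case judgment that the largest screen in a family represents the smaller ones, and brackets are omitted because the testing choices above judge them EMC-irrelevant:

    # Hypothetical encoding of Table 24.1: EMC-relevant options per model.
    # Display size and bracket style are deliberately omitted (see text).
    models = {
        "XV200": {"proc:100MHz", "if:RS485", "mode:800x600", "cover:plastic"},
        "XV250": {"proc:100MHz", "if:RS485", "if:CAN", "mode:800x600", "mode:1024x768", "cover:plastic"},
        "XV300": {"proc:100MHz", "if:RS485", "if:CAN", "if:Ethernet", "mode:800x600", "mode:1024x768", "cover:plastic"},
        "XV301": {"proc:100MHz", "if:RS485", "if:CAN", "if:Ethernet", "mode:800x600", "mode:1024x768", "cover:plastic"},
        "XV400": {"proc:133MHz", "if:RS485", "if:CAN", "if:Ethernet", "mode:800x600", "mode:1024x768", "mode:1280x1024", "cover:metal"},
        "XV401": {"proc:133MHz", "if:RS485", "if:CAN", "if:Ethernet", "mode:800x600", "mode:1024x768", "mode:1280x1024", "cover:metal"},
    }

    # Options actually exercised in each chosen test build.
    tested = {
        "XV300": {"proc:100MHz", "if:RS485", "if:CAN", "if:Ethernet", "mode:800x600", "cover:plastic"},
        "XV400": {"proc:133MHz", "if:RS485", "if:CAN", "if:Ethernet", "mode:1024x768", "mode:1280x1024", "cover:metal"},
    }

    all_options = set().union(*models.values())
    covered = set().union(*tested.values())
    print("not represented by the chosen builds:", sorted(all_options - covered) or "none")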

24.2.1.3. Revision State and Acceptable Revision Changes

During design and development you will want to perform some confidence tests. At this stage the build state must be carefully defined, by reference to drawing status, even if revision levels have not yet been issued. Once the equipment reaches the stage of compliance testing or customer qualification, the tests must be done on a specimen that is certified as being built to a specific revision level, which should be the same as that which is placed on the market.

Essentially, you must choose the test timing such that you can be reasonably sure that any subsequent changes will not invalidate the test results. But the test plan should define the exact build state of the equipment that will be presented for testing and if possible what changes to either hardware or software can be permitted, and the following test report should document any changes that have actually been applied during the tests, usually to make the product pass. Defining the exact build can itself be nontrivial, especially for large systems with many components.

24.2.2. Statement of Test Objectives

Self-evidently, you will need to state why the tests are to be performed and the type and detail of report to be issued: ranging from a simple certificate, through a report of tests done without results or details, to a full report including setups, result values, and other relevant information. For manufacturer's self-certification to European Union directives, the form and level of detail of the report are not specified and are set by the manufacturer's own requirements.

The time and cost involved in preparing a report will obviously reflect its complexity. Accredited reports are required to contain at least a minimum set of information, and the results are to be reported “accurately, clearly, unambiguously and objectively, and in accordance with any specific instructions in the test or calibration methods” (ISO/IEC, 2005); the last point clearly relates to the requirements of the basic standards of IEC 61000-4, whose Section 10 lays down the content of the report “as necessary to reproduce the test.” Nonaccredited reports, by contrast, may turn up in any format and with varying levels of detail. Possible objectives, for recording in your test plan, could be

  • To meet legislation (EMC Directive, FCC certification), as will be necessary to be legally permitted to place the equipment on the market
  • To meet voluntary standards, to improve your competitive advantage
  • To meet a contractual obligation, because EMC performance has been written in to the procurement contract for the equipment or its host system

24.2.3. The Tests to Be Performed

24.2.3.1. Frequency Ranges and Voltage Levels to Be Covered

These are normally specified in the standard(s) you have chosen. If you are not using standards, or are extending or reducing the range of the tests, this must be made clear. Even if you are using standards, there may be options for applied levels that must be explicitly chosen; or, for instance, the choice of Class A or Class B emissions limits depends on the EUT's application environment, and you must state how the chosen limits relate to this environment. The test plan should be specific about these parameters and, if one of a range of options has been chosen, should give the rationale for this.

24.2.3.2. Test Equipment and Facility to Be Used

In theory, this will also be determined by the standard(s) in use. Most standards have specific requirements for test equipment, such as CISPR 16-1-1 instrumentation for emissions measurements and various generators for the IEC 61000-4-X transient tests. If you will be using an external test house it will determine the instrumentation that it will need to use to cover the required tests. If you are doing it in-house, it is your responsibility.

For some tests, the situation is not so simple. Radiated emissions may be done on an open-area test site (OATS) or in a semi-anechoic screened room, or possibly in a GTEM or FAR (see Sections 22.2.4 and 22.3.1 for more discussion of these options), and the test distance may also be optional. The test facility actually used will probably depend on what is available, but also on other factors such as the size of the EUT. For radiated immunity, again there may be different facilities available and again the EUT dimensions play an important part in what is chosen. For conducted RF immunity and emissions on signal lines, you may have a choice of transducers. The test plan needs to pin these variables down, since the standards by themselves do not.

24.2.3.3. Choice of Tested Ports

The number of ports—a definition which includes the “enclosure port” as well as cable connectors—to be tested directly influences the test time. The standard(s) you choose to apply may define which lines should be tested, for instance the mains lead for conducted emissions. In some cases you can test just one representative line and claim that it covers all others of the same type.

It is important that both you and any subsequent assessment authority know why you have chosen either to apply or to omit tests to particular ports on the EUT. A decision not to test emissions or immunity of certain connected signal or I/O leads may rest on an agreed restriction of the allowable cable length that may be connected to the ports in question—for instance the generic standards apply fast transient and conducted RF immunity tests to signal ports on this basis. The use of a V-network or voltage probe rather than a LISN for supply line emissions measurement may have been due to insufficient current rating of the available coupling network.

The physical position of the test point can be critical, especially for electrostatic discharge application, and must be specified. The choice of ESD application points should be supported by an assessment of likely use of the equipment—which parts are accessible to the user—and/or some preliminary testing to determine weak points.

24.2.3.4. Number and Sequence of Tests and Operating Modes

The order in which tests are applied and the sequence of operating modes may or may not be critical, but should be specified. The results of one test may unintentionally set up conditions for the next, or the EUT could be damaged by inadvertent overtesting, especially with a large number of high-energy surges in a short time—you may want to leave these till last as a precaution. An alternative and opposite argument is that surge testing could damage filter components, leaving the product with high conducted emissions but still working apparently correctly, so that the potentially damaging tests should be applied first, or at least the conducted tests should be repeated after the surge. Make your own decision!

24.2.4. EUT Exercising Software and Ancillary Equipment or Simulators

24.2.4.1. EUT Operating Modes

Simple EUTs may have just one mode of operation, but any complex device will have several different operating modes. If you are lucky, you may be able to identify a worst-case mode that includes the majority of operating scenarios and emission/susceptibility profiles. This will probably need some exploratory testing and may also need software patches to repeatedly exercise a particular mode. The choice and number of modes has a direct influence on the testing time.

The rate at which a disturbance is applied or an emission measurement is made should depend on the cycle time of the specified operating mode. The test has to dwell on each test frequency for long enough to pick out the most susceptible response or the highest emission. If the EUT might exhibit low-duty-cycle, frequency-selective peaks in response, then the test dwell time should be extended to encompass the whole cycle; if the cycle is more than a few seconds, this can make the total test duration impractical, or at least expensive. Sometimes, variations in test results on the same EUT between laboratories can be traced to differences in this parameter.
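
The impact of dwell time on overall duration is easy to estimate. A minimal sketch, assuming the 80 MHz to 1 GHz range and the maximum 1% frequency step of IEC 61000-4-3, with an illustrative dwell chosen to cover the EUT's operating cycle:

    import math

    f_start, f_stop = 80e6, 1e9   # basic IEC 61000-4-3 sweep range (assumed)
    step_ratio = 1.01             # 1% logarithmic frequency steps
    dwell_s = 3.0                 # illustrative dwell covering the EUT cycle

    n_steps = math.ceil(math.log(f_stop / f_start) / math.log(step_ratio))
    sweep_s = n_steps * dwell_s
    print(f"{n_steps} frequencies, {sweep_s / 60:.0f} min per polarization and EUT face")
    # Four faces x two polarizations multiplies this by 8: a 10 s dwell
    # would already push a single radiated immunity test past 5 hours.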

The EUT may benefit from special software to fully exercise its operating modes; if it is not stand-alone, it will need some ancillary support equipment. Both of these should be calibrated or declared fit for purpose.

24.2.4.2. Emissions Exercising

It may, for example, be necessary to set up a particular display pattern to give the worst-case emissions. In fact, different display modes (VGA, SVGA, etc.) have different pixel clocks, and all of these should be exercised, either with a “representative” picture or with a worst-case pattern. A black-white vertical bar pattern, alternating on each horizontal pixel, will give a video signal with the highest amplitude of emission at half the pixel clock frequency. You will normally find that any real picture has its video RF energy distributed over a wider frequency range and therefore a lower amplitude at any particular frequency; but this does not take into account possible resonances in coupling, which can enhance certain frequencies, so it is preferable to test both worst-case and representative pictures.
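
To see why each display mode matters, consider where the bar pattern puts its energy. A minimal sketch, using nominal VESA 60-Hz pixel clocks (assumed; substitute your product's actual video timings):

    # The alternating black-white pixel pattern drives the video line as a
    # square wave at half the pixel clock, so the fundamental emission and
    # its odd harmonics land at different frequencies in each display mode.
    pixel_clock_mhz = {
        "VGA 640x480": 25.175,
        "SVGA 800x600": 40.0,
        "XGA 1024x768": 65.0,
        "SXGA 1280x1024": 108.0,
    }
    for mode, f_clk in pixel_clock_mhz.items():
        f0 = f_clk / 2
        print(f"{mode}: fundamental {f0:.3f} MHz, odd harmonics at {3 * f0:.1f}, {5 * f0:.1f} MHz ...")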

Communications interfaces are normally configured to continually send full packets of data; if they can be set to different data rates then these should all be checked to find the worst case. Memory interfaces such as hard drives, CD-ROMs, or flash cards should be exercised by continuous read/write cycles. Variable-speed motor drives or similar devices such as light dimmers will need to be evaluated for worst case emissions at different settings. Typically, maximum emissions occur at half power but this is not universal.

Once you start to explore the possible combinations in a typical complex product, it quickly becomes apparent that you could spend a very long time trying to find the worst case, and usually you have to find an acceptable compromise that adequately fulfills the tests but minimizes their expense and duration. It is often for this reason that test results vary between labs, and the need for a definite and detailed test plan becomes obvious.

24.2.4.3. Immunity Exercising and Monitoring

Similar issues of exercising apply for immunity tests, with the added complication that the ancillary equipment is often used to monitor for failures against the performance criteria (see Section 24.3). This may be, for instance, checking the bit error rate of a data communications link or measuring the breakthrough of 1-kHz modulation on an audio output. Power supplies or instrumentation could be monitored by logging their output voltage on a digital voltmeter. In all these cases the measuring device that checks performance must be unaffected by the disturbance stress (RF or transient) that is being applied to the EUT; this means either isolating the support and monitoring equipment from the test or being sure that its EMC performance is better than that of the equipment being tested.
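
As a concrete illustration of the logging approach, here is a minimal sketch of automated criterion checking during a disturbance application. read_dvm_volts() is a hypothetical placeholder for your instrument driver, and the 5 V ± 1% criterion is an assumed performance specification:

    import time

    NOMINAL_V = 5.00      # assumed nominal output of the monitored supply
    TOLERANCE_V = 0.05    # assumed criterion: no deviation beyond 1%

    def read_dvm_volts():
        # Hypothetical: replace with a real driver call (e.g. over GPIB/VISA).
        raise NotImplementedError

    def monitor(duration_s, interval_s=0.5):
        """Log the output during the test; return (passed, worst deviation)."""
        worst = 0.0
        t_end = time.monotonic() + duration_s
        while time.monotonic() < t_end:
            worst = max(worst, abs(read_dvm_volts() - NOMINAL_V))
            time.sleep(interval_s)
        return worst <= TOLERANCE_V, worst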

24.2.4.4. Isolating the Support Equipment

If the support equipment will be housed in a separate chamber during the test, its cables can be interfaced via a filtered bulkhead which will reduce fortuitous emissions and isolate it from disturbances applied to the EUT (Figure 24.1). This filtering arrangement will need to be specified to ensure that, while isolating the support equipment from the test, it does not affect the passage of the wanted signals, and for unscreened cables this requirement can frequently restrict the application of this method. Screened cables are easy to deal with as long as the screen is bonded to the wall of the chamber as it passes through the bulkhead.

Figure 24.1. Exercising and ancillary support equipment.

If the support equipment is not isolated from the test, filtering is not needed, but the support equipment's own EMC performance must then be sufficiently well specified so that it does not have a bearing on the test outcome. A compromise solution is to use ferrite clamps on the cables between the EUT and the support equipment.

Some military/aerospace tests require that the unit is tested in conjunction with the other items to which it will be connected in the final installation. Besides not affecting the outcome of the test, these items should also offer the appropriate RF terminating impedance to the connected cables. Thus the description of the ancillary equipment setup must be detailed enough to allow the RF aspects to be reproduced.

24.2.5. Requirements of the Test Facility

Some types of electronic or electrical product make special demands on the services provided by the test house, which must be clearly understood before you start testing as some test houses will be unable to provide them. Requirements may include for instance

  • Environmental conditions—special requirements for temperature, humidity, or vibration
  • Safety precautions needed—such as, if the EUT uses ionizing radiations, hazardous gases, or extra high voltages; is dangerously heavy or hot; or if the tests require high values of radiated field
  • Special handling and functional support equipment—fork lift trucks, large turntables, hydraulic or air/water cooling services; extreme examples include a rolling road dynamometer for whole vehicle testing or exhaust gas extraction for jet aircraft
  • Power sources—AC or DC voltage, current, frequency, single- or three-phase, VA rating and surge current requirement, total number of power lines (remember that FCC certification testing will need U.S.-standard power supplies)
  • System command software—will the tests require special software to be written to integrate the test suite with the EUT operation?
  • Security classification—relevant for some government or military projects

24.2.6. Details of the Test Setup

24.2.6.1. Physical Location and Layout of the EUT and Test Transducers

This is defined in general terms in the various standards—see for instance the setup diagrams in CISPR 22—but you will usually have to interpret the instructions given in these to apply to your particular EUT. (CISPR 22 assumes that your EUT will normally be a computer, a monitor, a keyboard, a mouse, and a peripheral!) The purpose of the standard setup is to attempt to maximize repeatability between test labs. Critical points for HF tests are distances, orientation, and proximity to other objects, especially the ground plane, since this determines the stray capacitance coupling which can influence the results; for conducted tests, the cable lengths to impedance stabilizing networks or coupling networks are equally critical.

An early decision has to be made as to whether the EUT is to be tested as tabletop equipment or floor standing—most standards give different layouts for each. This is fine if your product is clearly intended for one or the other, but if it may be, for example, handheld or wall mounted, you have to apply a degree of interpretation. There are many situations for which there is no “right” answer, and you have to be able to justify your decision and record the actual setup used in detail, so that it can be replicated.

The final test report should include photographs that record the setup as well as sketches showing relevant distances. Before the test, the laboratory will need to know the general arrangement that they are expected to implement, which should relate to representative conditions of actual use as far as possible while still being consistent with the standard.

24.2.6.2. Electrical Interconnections

Cable layout and routing has a critical effect at high frequencies and must be closely defined. A cable that is run close to the ground plane and in the opposite orientation to the measuring antenna will radiate far less than one which is suspended in free space and aligned with the antenna. Types of connector and cable to the EUT should be specified, if they would otherwise go by default, as the termination affects the coupling of interference currents between the EUT and the cable. The termination at the far end affects the radiating impedance presented to the EUT and must also be specified.

This then raises the issue of what is the correct length of cable to use, and how should it be terminated. It is vital to use a type of cable and connector that will actually be used in the real situation, especially if compliance depends on cable screening, as otherwise the test will not be representative, however closely the EUT itself represents the production model. If you just pick up and use any old cable that happens to be lying around, there is no guaranteeing what results you might get.

The cable length can be equally contentious. If you will be supplying a cable of a particular length with the equipment when it is marketed, by all means use that, with excess length bundled as per any instructions in the standard. But there are many circumstances where you have no control over the actual length of cable that will be used in reality, and it may vary enormously from one application to another. In this case, and if the standard gives no guidance, all that you can do is choose a length that is practical for the test setup and that could be used in reality, and document it in the test plan and the test report.

Finally, the electrical loading of the EUT can be important. If it is a power supply, then the actual power that is drawn can affect its EMC profile, and it would be typical to test it at least at its rated load; but also the performance could get worse when the supply is lightly loaded (this effect has been documented with some television sets, which may have worse emissions in standby than when the TV is operating). How the load is referred to the ground reference of the test must also be defined. A simple resistor on a power supply output, floating with respect to the ground plane, can create a completely different RF impedance compared to a realistic piece of equipment which, even if it is not directly earthed, has a much greater stray capacitance to ground.

24.2.7. How to Evaluate Test Results

24.2.7.1. Acceptance Margins

The main results of the tests will be the levels of EUT emissions or the levels of disturbance at which susceptibility effects occur. There needs to be an accepted way of deriving safety margins between these levels and specification limits, determined by known measurement uncertainties and the likely variations between production units and the EUT. These margins are essential to an interpretation of the test results. The CISPR 80/80 rule (under which at least 80% of production units must comply, with 80% confidence) can take care of the second of these if you are prepared to test several samples, but if you are only testing one instance of the product then you need to derive a margin essentially by “experienced guesswork.” A typical margin that might serve as a starting point would be between 4 and 6 dB below the limit, with the option to reduce this if experience shows that products are more uniform, or to increase it if they are not.
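
A minimal sketch of how such a margin might be applied when judging a single-sample emissions result; the function name is illustrative and the 6-dB figure is the starting point suggested above:

    def emissions_verdict(measured_dbuv, limit_dbuv, margin_db=6.0):
        """Judge a measured emission against the limit less an in-house margin."""
        if measured_dbuv <= limit_dbuv - margin_db:
            return "pass with margin in hand"
        if measured_dbuv <= limit_dbuv:
            return "pass, but inside the margin: expect production spread problems"
        return "fail"

    # e.g. 52 dBuV quasi-peak against a 56 dBuV limit falls inside the margin:
    print(emissions_verdict(52.0, 56.0))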

With respect to measurement uncertainty, Section 23.5 discusses emissions uncertainty and Table 23.2 indicates the metrologically sound way of reporting pass or fail, but if you are after a compliance declaration, then a qualified statement that cannot be quoted with 95% confidence (cases B or C in Table 23.2) is not of much use.

24.2.7.2. Application of Measurement Uncertainty to Emissions

The publication of CISPR 16-4-2 has provided a formal means of dealing with this problem for the three most common emissions tests, that is, mains conducted, disturbance power, and radiated electric field. The publication gives explicit values for an acceptable uncertainty for each of these (U_CISPR; see Table 23.3), and if the test house uncertainty is less than this, then (CISPR, 2003)

  • Compliance is deemed to occur if no measured disturbance exceeds the disturbance limit
  • Noncompliance is deemed to occur if any measured disturbance exceeds the disturbance limit

In other words, no account is taken of the actual uncertainty. If the test house uncertainty is greater than U_CISPR, then the measured value must be increased by the difference between the two before being compared to the limit, so there is an incentive not to use test houses with large declared uncertainties.
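
A minimal sketch of this decision rule; the numerical values are illustrative only, and u_lab and u_cispr are the laboratory's and the standard's expanded uncertainties in dB:

    def cispr_16_4_2_compliant(measured_db, limit_db, u_lab_db, u_cispr_db):
        """Apply the CISPR 16-4-2 rule described above."""
        if u_lab_db > u_cispr_db:
            # Only the excess over U_CISPR penalizes the measurement.
            measured_db += u_lab_db - u_cispr_db
        return measured_db <= limit_db

    print(cispr_16_4_2_compliant(39.0, 40.0, 5.0, 5.2))  # True: direct comparison
    print(cispr_16_4_2_compliant(39.0, 40.0, 6.8, 5.2))  # False: 1.6 dB added gives 40.6 dB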

The test plan, therefore, can reference CISPR 16-4-2 as a means of eliminating the application of uncertainty for these tests, although eventually we can expect that relevant CISPR documents will do so anyway. For tests for which there is no value given for U_CISPR, another way of applying the uncertainty must be stated. It would be ideal if standards themselves specified how to do this, and indeed standards development is moving in this direction; but meanwhile, the alternative is best illustrated by quoting UKAS LAB 34:

  In such cases it may be appropriate for the user to make a judgement of compliance, based on whether the result is within the specified limits with no account taken of the uncertainty. This is often referred to as “shared risk”, since the end-user takes some of the risk that the product may not meet the specification. In this case there is an implicit assumption that the magnitude of the uncertainty is acceptable and it is important that the laboratory should be in a position to determine and report the uncertainty. The shared risk scenario is normally only applicable when both the laboratory's client and the end user of the equipment are party to the decision. It would not normally apply to regulatory compliance testing unless expressly referenced by the appropriate regulatory or standards making bodies. Even in these cases the acceptable measurement uncertainty should be stated and the laboratory should demonstrate that its uncertainty meets the specified allowance … (UKAS, 2002, para 4.3)

24.2.7.3. Application of Measurement Uncertainty to Immunity

There is no equivalent of CISPR 16-4-2 for immunity tests. Uncertainty can be quoted for the applied stress in RF immunity tests, but it cannot realistically be quoted for the applied stress in transient tests, since the transient generator specification combines amplitude and time quantities; nor can it generally be calculated for the measurement of the EUT's response, which is likely to be highly nonlinear (Williams and Baker, 2002). LAB 34 says:

  In the case of immunity testing against a specified interference level, e.g. a radiated field strength, it is recommended that, in the absence of other guidance, the test is performed at the specified immunity level increased by the standard uncertainty multiplied by a factor k of 1.64

pointing out that this would give a confidence level in the value of the actual stress of 90%, which in turn means that there is a confidence of 95% that at least the required specification level has been applied. Thus if the lab's standard uncertainty was 1.5 V/m on a test level of 10 V/m, it would set the actual applied stress to be 12.46 V/m, and this would ensure to a 95% confidence that 10 V/m had been achieved.

However, the catch is that for RF immunity testing to EN 61000-4-3 and -6 there is “other guidance”: CENELEC has produced “interpretation sheets” for these two standards. For EN 61000-4-3 this states that “The test field strengths are to be applied as stated in Tables 1 and 2, or as defined in the product standard, without any increase to take into account uncertainties in the calibration of the field”; for EN 61000-4-6, “The test levels are to be applied as stated in Table 1, or as defined in the product standard, without any increase to take into account uncertainties in the setting of the output level at the EUT port of the coupling device. The test generator shall be adjusted to produce the nominal value of Umr as defined in 6.4.1 of the standard.”

One can only assume that the relevant standard committee understood that this was equivalent to requiring a confidence level of just 50% that the proper stress level had been applied, and that this was their intention; but since they have published the documents and the ENs have now been amended, you are at liberty to take this approach for RF immunity testing if you are referencing EN 61000-4-3 or -6. It would be advisable for the test plan to state whether the test level is to be increased by (1.64 · standard uncertainty) or not. Note that there is no equivalent interpretation for the international standards IEC 61000-4-3 or -6, although amendments regarding measurement uncertainty are in train for them.
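
For reference, the LAB 34 adjustment quoted above reduces to one line of arithmetic; this sketch simply reproduces the worked figure:

    K = 1.64           # one-sided coverage factor recommended by LAB 34
    u = 1.5            # lab's standard uncertainty, V/m (example from the text)
    nominal = 10.0     # specified test level, V/m
    applied = nominal + K * u
    print(f"applied stress: {applied:.2f} V/m")   # 12.46 V/m, as in the text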

24.3. Immunity Performance Criteria

When you perform immunity testing, it is essential to be able to judge whether the EUT has in fact passed or failed the test. This in turn demands a statement of the minimum acceptable performance that the EUT must maintain during and after testing. Such a statement can only be related to the EUT's own functional operating specification.

24.3.1. The Generic Criteria

The variety and diversity of equipment and systems makes it difficult to lay down general criteria for evaluating the effects of interference on electronic products. Nevertheless, the test results can be classified on the basis of operating conditions and the functional specifications of the EUT according to the criteria discussed next. To provide a basis for the statement of acceptable performance, the generic immunity standards contain a set of guidelines for the criteria against which the operation of the EUT can be judged and that are used to formulate the acceptance criteria for a given EUT against specific tests:

  • Performance criterion A The apparatus should continue to operate as intended. No degradation of performance or loss of function is allowed below a performance level specified by the manufacturer, when the apparatus is used as intended. In some cases the performance level may be replaced by a permissible loss of performance. If the minimum performance level or the permissible performance loss is not specified by the manufacturer then either of these may be derived from the product description and documentation (including leaflets and advertising) and what the user may reasonably expect from the apparatus if used as intended.
    • This criterion applies to phenomena that are normally continuously present, such as RF interference.
  • Performance criterion B The apparatus should continue to operate as intended after the test. No degradation of performance or loss of function is allowed below a performance level specified by the manufacturer, when the apparatus is used as intended. During the test, degradation of performance is however allowed. No change of actual operating state or stored data is allowed. If the minimum performance level or the permissible performance loss is not specified by the manufacturer then either of these may be derived from the product description and documentation (including leaflets and advertising) and what the user may reasonably expect from the apparatus if used as intended.
    • This applies to transient phenomena.
  • Performance criterion C Temporary loss of function is allowed, provided the loss of function is self-recoverable or can be restored by the operation of the controls.
    • This applies to long-duration mains interruption.

24.3.2. Interpreting the Generic Criteria

It is up to the manufacturer to specify the limits that define “degradation or loss of function” and what is a “permissible performance loss” and to decide which of these criteria should be applied to each test. Such specifications may be prompted by preliminary testing or by known customer requirements. In any case it is important that they are laid out in the final EMC test plan for the equipment, since at the very least you need to know how to monitor the EUT during the tests. If the equipment is being supplied to a customer on a one-to-one contractual basis then clearly there is room for mutual agreement and negotiation on acceptance criteria, but this is not possible for products placed on the mass market, which have only to meet the essential requirements of the EMC or R&TTE Directives. In these cases, you have to look to the immunity standards for general guidance.

An example could be, if a measuring instrument has a quoted accuracy of 1% under normal conditions, it would be reasonable to expect this accuracy to be maintained when subject to RF interference at the level specified in the standard, unless your operating manual and sales literature specifies a lower accuracy under such conditions. It may lose accuracy when transients are applied, but must recover it afterward. A processor-based product may exhibit distortion or “snow” on the image displayed on its video monitor under transient interference, but it must not crash nor suffer corruption of data.

The second edition of the EMC Directive has made little change in the essential requirement for immunity except to require that the apparatus should “operate without unacceptable degradation of its intended use” in the presence of electromagnetic disturbance (the word unacceptable is new). In the same document emphasis is also placed on the information provided with the product, particularly that which is necessary to enable it to be used “in accordance with its intended purpose.” In other words, the performance criteria may be tailored to a compromise that restricts the definition of intended use and unacceptable degradation.

Of course, what your customers think of these issues will also be important to you. The generic criteria make clear that, in the absence of any other information, what the user may “reasonably expect” is the defining point. The consequence of this is that the EMC test plan is of importance to the marketing department as well as to engineering; it is not acceptable for marketing to shrug off all responsibility for EMC compliance. Though the department may not have a technical input, it must be prepared to sign off on the performance criteria as well as other aspects, such as choice of cables, operating modes, functional configuration, and so on.

24.3.3. Product-Specific Criteria

Some product-specific immunity standards can be more precise in their definition of acceptable performance levels. For example, EN 55020, applying to broadcast receivers, specifies a wanted-to-unwanted audio signal ratio for sound interference and a just-perceptible picture degradation for vision interference. Even this relatively tight definition may be open to interpretation. Another example is EN 55024 for IT equipment, which gives particular criteria for the immunity of telephone audio links, fax machines, and VDUs. Telecommunications equipment is covered by various ETSI standards; typically it might be required to comply with a defined criterion for bit error rate and loss of frame alignment.

The subjectivity of immunity performance criteria can still present a major headache in the legal context of the Directives. It will undoubtedly be open to many manufacturers to argue if challenged, not only that differing test procedures and layouts have been used in judging compliance of the same product, but that differing criteria for failure have also been applied. It will be in their own interest to be clear and precise in their supporting documentation as to what they believe the acceptance criteria are and to ensure that these are in line as far as possible with either the generic guidelines or those given in applicable product standards.

References

CISPR. (2003). CISPR 16-4-2:2003, Specification for Radio Disturbance and Immunity Measuring Apparatus and Methods. Part 4-2: Uncertainties, Statistics and Limit Modelling. Uncertainty in EMC Measurements.

ISO/IEC. (2005). ISO/IEC 17025:2005, General Requirements for the Competence of Testing and Calibration Laboratories.

UKAS. (2002). The Expression of Uncertainty in EMC Testing. Publication LAB 34, Edition 1. Feltham, UK. Available at www.ukas.com.

U.K. Ministry of Defence. (n.d.). Electromagnetic Compatibility. Defence Standard 59–41. Available at www.dstan.mod.uk.

Williams, T., and Baker, S. (2002). Uncertainties of Immunity Measurements. Elmac Services and Schaffner EMC Systems Ltd, DTI-NMSPU project R2.2b1. Available at www.elmac.co.uk/papers01.htm.
