Appendix A0
Nomenclature

In this book, we frequently discuss the ultimate limits of sensitivity of a specific instrument or measuring method. Terms like sensitivity, quantity of response, ultimate limit of performance, etc., are not free of ambiguity. Indeed, electro-optical instrumentation draws on results from different disciplines, such as optics, electronics, and measurement science, where the same terms are not always used with the same meaning. For the sake of clarity, in this appendix we define a few terms used in the text.

A0.1 Responsivity and Sensitivity

In a classical paper [1], Jones considered the response of a generic sensor (actually, it was a photodetector, but the concepts had a more general applicability) and introduced the following quantities:

- The responsivity R, the ratio of the output signal (say, a voltage signal Vu) to the input physical quantity, or measurand, M: R = Vu/M. Clearly, this is not a sensitivity limit, because we can increase it at will with an amplifier. The responsivity is nevertheless a useful quantity, because it supplies the scale factor of the conversion performed by the sensor.

- The NEI, or noise equivalent input, of the sensor. This is given by the output noise, usually taken as the rms value vn of the fluctuation found at the sensor output, divided by the responsivity: NEI = vn/R.

- The dynamic range DR, defined as the ratio of the maximum signal Vmax supplied at the output before the sensor saturates (or its response becomes appreciably distorted) to the noise seen at the output: DR = Vmax/vn.

- The detectivity D*, a figure of merit of a sensor's ability to detect small signals [2].

With these definitions, the term sensitivity is no longer necessary. Dropping it helps alleviate ambiguity, because some researchers understand sensitivity as the NEI, whereas, especially in electronic measurements [3,4], it is used with the meaning of responsivity.
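A minimal numeric sketch of these figures of merit may help fix ideas. All the values below (signal, noise, and saturation levels of a hypothetical photodetector) are made up for illustration and are not taken from the text.

```python
import numpy as np

# Illustrative values for a hypothetical photodetector (assumptions,
# not data from the text).
M = 1e-6         # input measurand: optical power [W]
V_u = 0.45       # output signal produced by that input [V]
v_n = 2e-4       # rms noise at the output [V]
V_max = 5.0      # output level at which the response saturates [V]

R = V_u / M       # responsivity [V/W]: scale factor of the conversion
NEI = v_n / R     # noise equivalent input [W]: smallest resolvable input
DR = V_max / v_n  # dynamic range (a pure number, often quoted in dB)

print(f"R   = {R:.3g} V/W")
print(f"NEI = {NEI:.3g} W")
print(f"DR  = {DR:.3g} ({20 * np.log10(DR):.1f} dB)")
```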

A0.2 Uncertainty and Resolution

The uncertainty of the result of a measurement generally consists of several contributions, which have traditionally been classified as accidental (or random) and systematic. Recently, however, NIST and other international committees have adopted [5] the more rigorous classification of uncertainty contributions as Type A (those evaluated by statistical methods) and Type B (those evaluated by other means). The correspondence between the old and new classifications is not one-to-one, as the example below explains.

In the old classification, uncertainty is systematic when the deviation from the true value repeats identically in successive measurements. Incorrect calibration and measurement bias or offset are systematic effects. Systematic uncertainty cannot be reduced by averaging; to remove it, we must correct the data or act on the measurement method.

Uncertainty is accidental when the deviation from the true value varies randomly from measurement to measurement. Because of this random nature, accidental uncertainty can be reduced by averaging the results of several successive measurements.

Usually, measurements affected by accidental effects obey Gaussian statistics, and their rms deviation decreases as 1/√N, where N is the number of measurements being averaged.
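This 1/√N behavior is easy to verify numerically. The following is a minimal Monte Carlo sketch; the true value and noise level are arbitrary illustration choices, not data from the text.

```python
import numpy as np

# Verify that the rms deviation of the mean of N Gaussian readings
# falls as sigma/sqrt(N). All numbers are illustrative assumptions.
rng = np.random.default_rng(0)
true_value = 10.0
sigma = 0.5        # rms accidental uncertainty of a single reading
trials = 20000     # number of repeated averaging experiments

for N in (1, 4, 16, 64):
    # average N noisy readings, repeated over many trials
    means = rng.normal(true_value, sigma, size=(trials, N)).mean(axis=1)
    print(f"N={N:3d}: rms of average = {means.std():.4f}, "
          f"theory sigma/sqrt(N) = {sigma / np.sqrt(N):.4f}")
```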

In the old classification, precision referred to the accidental uncertainty of an instrument, whereas accuracy referred to the systematic uncertainty. Either of the two may prevail in a sensor.

One last quantity of interest is resolution. Resolution is defined as the minimum increment of response that the sensor can perceive. If the sensor has a digital readout, resolution is just one unit of the least significant digit (LSD) and represents the effect of truncation (or round-off) error. Truncation is a Type B effect if the measurement randomness is much smaller than 1 LSD, whereas it is a Type A effect if the randomness is much larger than 1 LSD (or, more precisely, a Type A effect is added). In the latter case, we can improve resolution by averaging.

Example. Let a 99.4-mm stick be measured with a sensor having 1-mm resolution. If the accidental uncertainty is ea < 0.1 mm, every measurement is rounded to the same result, 99 mm. But if ea = 1 mm, we obtain different outcomes from the measurement, for example, 99, 100, 99, 99, 101, etc. By averaging, we obtain a result that approaches the true value of 99.4 mm as N increases.
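A short simulation of this example makes the mechanism concrete. This is a sketch under the stated assumptions (1-mm least significant digit, Gaussian accidental uncertainty), not a definitive model of any particular sensor.

```python
import numpy as np

# 99.4-mm stick read by a sensor with 1-mm resolution (LSD = 1 mm).
# With negligible noise every reading rounds to 99 mm; with ~1 mm rms
# noise, the noise dithers the readout and averaging recovers 99.4 mm.
rng = np.random.default_rng(1)
true_length = 99.4   # mm
N = 10000            # number of averaged measurements

for e_a in (0.05, 1.0):   # rms accidental uncertainty [mm]
    readings = np.round(true_length + rng.normal(0.0, e_a, N))
    print(f"e_a = {e_a:4.2f} mm: mean of {N} readings = "
          f"{readings.mean():.2f} mm")
```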

References

[1] R.C. Jones, "Performances of Detectors for Visible and Infrared Radiation", in Advances in Electronics and Electron Physics, vol. 4, pp. 2-88, New York: Academic Press, 1952.

[2] S. Donati, "Photodetectors", Upper Saddle River, NJ: Prentice Hall, 2000, Ch. 3.

[3] H.K.P. Neubert, "Instrument Transducers", 2nd ed., London: Oxford University Press, 1976.

[4] "ISO International Vocabulary of Basic and General Terms in Metrology", Geneva: ISO, 1993.

[5] B.N. Taylor and C.E. Kuyatt, "Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results", NIST Technical Note 1297, 1994.
