3.1 Introduction

Receiver operating characteristic (ROC) analysis has been widely used in signal processing and communications to assess the effectiveness of a sensor or detector for signal detection (Poor, 1994). In recent years, it has also become a common tool for evaluating the effectiveness of a medical modality in medical diagnosis, specifically for computer-assisted diagnostic systems (Metz, 1978; Swets and Pickett, 1982), automatic target recognition (ATR) (Parker et al., 2005a, 2005b; Blasch and Broussard, 2000; Bauman et al., 2005), and fusion analysis (Blasch et al., 2001; Blasch and Plano, 2003; Blasch, 2008). The idea is simple. For a given detector or detection technique, how can we objectively evaluate whether or not it is effective, and in what sense? Many criteria or cost functions are available for such assessment, such as least-squares error, signal-to-noise ratio, and misclassification error. Unfortunately, none of these criteria can serve as a general criterion that fits all detection problems. For example, least-squares error or signal-to-noise ratio may be a good criterion for detection problems in signal processing and communications but may not be suitable for measuring image quality or classification accuracy in image processing. To avoid committing to a specific criterion for performance evaluation, ROC analysis is introduced for this purpose. It does not specify a particular criterion or cost function. Instead, it focuses on the effect of a decision made for a detection problem regardless of which specific criterion or cost function is used. More specifically, it casts a detection problem as a binary decision problem, that is, a binary hypothesis testing problem, which results in four decisions that need to be considered:

1. When the ground truth of the problem is true, we make a “not true” decision. In this case, we commit an error, referred to as a “miss” or “false negative” decision.
2. When the ground truth of the problem is true, we make a “true” decision. In this case, we make a correct decision, referred to as a “detection” or “true positive” decision.
3. When the ground truth of the problem is not true, we make a “true” decision. In this case, we commit another type of error, referred to as a “false alarm” or “false positive” decision.
4. When the ground truth of the problem is not true, we make a “not true” decision. In this case, we make another type of correct decision, referred to as a “true negative” decision.
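The four decisions above are simply the four cells of a confusion matrix. As a minimal sketch (with hypothetical data, purely for illustration), the following tallies each outcome from a list of (ground truth, decision) pairs and derives the detection and false alarm rates used throughout the chapter:

```python
# Tally the four decision outcomes for paired ground truths and decisions.
# The example data below are hypothetical, purely for illustration.
def tally_decisions(truths, decisions):
    tp = sum(1 for t, d in zip(truths, decisions) if t and d)          # detection / true positive
    fn = sum(1 for t, d in zip(truths, decisions) if t and not d)      # miss / false negative
    fp = sum(1 for t, d in zip(truths, decisions) if not t and d)      # false alarm / false positive
    tn = sum(1 for t, d in zip(truths, decisions) if not t and not d)  # true negative
    pd_rate = tp / (tp + fn)   # detection rate PD
    pf_rate = fp / (fp + tn)   # false alarm rate PF
    return pd_rate, pf_rate, tp, fn, fp, tn

truths    = [True, True, True, False, False, False, False, True]
decisions = [True, False, True, True, False, False, False, True]
pd_rate, pf_rate, *_ = tally_decisions(truths, decisions)
print(pd_rate, pf_rate)  # → 0.75 0.25
```

Note that PD and PF are computed over disjoint populations (truly present vs. truly absent), which is why the two error types can be traded off against each other.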

Using the above four decisions, we can evaluate a given detector or detection technique according to its effectiveness without actually appealing to a specific performance criterion or cost function. Since the two types of error trade off against each other, a general practice is to choose the “false alarm,” that is, the “false positive,” as a base for producing the best decision. In other words, by constraining the false alarm rate to a level that the considered problem can tolerate, what is the best detection power, in terms of probability, that a detector or detection technique can achieve? ROC analysis is developed to address this issue. For any given detector or detection technique, ROC analysis plots a curve, referred to as an ROC curve, of detection rate versus false alarm rate. An ROC curve is thus a function of two parameters, the detection rate PD and the false alarm rate PF, and is used to evaluate how effective a given detector or detection technique is. For example, suppose that two detectors or detection techniques are considered for performance evaluation. To see which one performs better, we first generate and compare their ROC curves. If, for every given false alarm rate, the detection rate of one technique is higher than that of the other, we can conclude that this technique is more effective than the other. Since an ROC curve increases monotonically from (0, 0) to (1, 1), we can instead compare the areas under the ROC curves, called the area under the curve (AUC) and denoted Az, without examining the individual (false alarm rate, detection rate) pairs. As a result, the higher the value of Az, the better the detection performance.
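An empirical ROC curve and its AUC can be computed directly from detector scores. The sketch below (with hypothetical scores; the function names are ours, not from the chapter) sweeps a threshold over all observed scores to collect (PF, PD) pairs, then integrates by the trapezoidal rule:

```python
def roc_points(scores_signal, scores_noise):
    """Empirical (PF, PD) pairs obtained by sweeping a threshold over all scores."""
    thresholds = sorted(set(scores_signal + scores_noise), reverse=True)
    points = [(0.0, 0.0)]
    for tau in thresholds:
        pd_rate = sum(s >= tau for s in scores_signal) / len(scores_signal)
        pf_rate = sum(s >= tau for s in scores_noise) / len(scores_noise)
        points.append((pf_rate, pd_rate))
    return points  # starts at (0, 0) and ends at (1, 1)

def auc(points):
    """Area under the ROC curve via the trapezoidal rule."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# Hypothetical detector scores for signal-present and noise-only cases.
signal = [0.9, 0.8, 0.7, 0.4]
noise  = [0.5, 0.3, 0.2, 0.1]
print(auc(roc_points(signal, noise)))  # → 0.9375
```

This AUC equals the fraction of (signal, noise) score pairs the detector ranks correctly, which is one way to read Az as a threshold-free summary of detection performance.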

Although ROC analysis does not use an explicit cost function, it does use PF as a cost measure to evaluate detection performance. The PF in turn involves a hidden implicit parameter, the threshold τ, which actually determines PF (see the Neyman–Pearson detector specified in Section 3.5). In other words, it is τ, a real cost parameter, that implements a detector, with both PD and PF calculated as functions of τ. Unfortunately, this τ does not appear in traditional ROC analysis, since the PF of an ROC curve is treated as an independent variable ranging from 0 to 1 rather than as a variable dependent on τ. To deal with this issue, the concept of a three-dimensional (3D) ROC was first envisioned by Alsing et al. (1999), who introduced a 3D ROC trajectory by including a third parameter, such as the probability of rejection, to assess a detector's degree of difficulty in recognizing unknown targets due to lack of confidence. At nearly the same time, Chang et al. (2001b) proposed a rather different concept that directly involves τ in determining PF and PD, arguing that the threshold τ ultimately determines detection performance. When a detector is not ready to make its decision due to lack of confidence in the provided evidence, two approaches can resolve this dilemma. One is to reject the decision, as proposed by Alsing et al. (1999) with applications to fusion analysis for user–synthetic aperture radar (SAR)–ATR systems (Plano and Blasch, 2003), rather than making a hard decision. The other is to make a soft decision based on the likely cost determined by the threshold τ (i.e., in this case, the likelihood ratio test is equal to the threshold τ in (2.6) or (2.8)). Under this circumstance, the detector is forced to make a random decision according to the likelihood of each decision, in terms of a probability calculated from the threshold τ. The resulting detector is called a randomized detector.
The work using a threshold representing the likelihood of signal presence in Parker et al. (2005a) and the work using the confidence error as a criterion in Parker et al. (2005b) are good examples in this respect. As a matter of fact, a random decision is better than a rejection, because a randomized detector offers the likelihood of each decision to be made in terms of its probability, whereas a rejection simply makes no decision at all. In a broader sense, a rejection can be interpreted in the context of a random decision, where the probability of rejection describes the likelihood of a decision being rejected. Since a great deal of research effort has already been devoted to 3D ROC analysis using the probability of rejection as the third parameter (Alsing et al., 1999; Plano and Blasch, 2003), this chapter focuses only on the development of a 3D ROC analysis using the threshold τ as the third parameter, which has been investigated in hyperspectral imaging (Chang et al., 1998; Chang, 2002, 2003a), magnetic resonance imaging (Wang et al., 2003, 2005; Chen et al., 2005), chemical/biological agent detection (Chang, 2006; Liu et al., 2005), and biometric identification (Du and Chang, 2007, 2008). In these applications, a 3D ROC curve can be plotted according to the three parameters PD, PF, and the threshold τ, because the detector implemented in each of them is actually an estimator whose estimated values represent the strength of signal detectability; signal detection is then performed via a threshold τ that helps the signal estimator make a binary decision. By virtue of these three parameters, PD, PF, and τ, we can derive a 3D ROC analysis that generates a 3D ROC curve as a function of PD, PF, and τ. From a 3D ROC curve, three 2D ROC curves can also be derived and plotted. One is the ROC curve of (PD, PF), which turns out to be the ROC curve produced by traditional ROC analysis. The other two are new 2D ROC curves: the ROC curves of (PD, τ) and (PF, τ).
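The relation between the 3D ROC curve and its three 2D projections can be made concrete with a small sketch. The detection problem below is hypothetical (noise-only scores drawn from N(0, 1), signal-present scores from N(2, 1)), chosen only so that PD(τ) and PF(τ) have closed forms; sweeping τ traces the 3D curve, and dropping one coordinate gives each 2D curve:

```python
import math

def normal_sf(x, mu=0.0, sigma=1.0):
    """Gaussian survival function P(X >= x), via the complementary error function."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

# Hypothetical binary detection problem: noise-only scores ~ N(0, 1),
# signal-present scores ~ N(2, 1).  Sweeping the threshold tau traces the
# 3D ROC curve of (PD, PF, tau).
taus = [i * 0.5 for i in range(-4, 13)]                    # tau from -2.0 to 6.0
curve_3d = [(normal_sf(t, mu=2.0), normal_sf(t), t) for t in taus]

# The three 2D ROC curves are projections of the 3D curve.
roc_pd_pf  = [(pf, pd) for pd, pf, _ in curve_3d]   # conventional ROC curve (PD, PF)
roc_pd_tau = [(t, pd) for pd, _, t in curve_3d]     # new curve (PD, tau)
roc_pf_tau = [(t, pf) for _, pf, t in curve_3d]     # new curve (PF, tau)
```

The (PD, τ) and (PF, τ) projections expose exactly the threshold information that the conventional (PD, PF) curve hides.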

Since Neyman–Pearson detection theory is mainly focused on detecting a signal in noise, the decision of detecting noise, that is, the fourth decision described earlier, does not make sense there. However, in other applications such as medical diagnosis, the probability of making the fourth decision, a true negative, represents the specificity of a medical modality. Moreover, when multiple signals are to be detected, multiple hypotheses may be required to perform multisignal detection. This problem can actually be addressed as a signal classification problem, where different threshold values of τ are required to classify different signal classes. Unfortunately, the 2D ROC curve of (PD, PF) does not provide such information about τ. This chapter makes an attempt to address this need and explores 3D ROC analysis and its utility in four different applications.
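The idea of class-specific thresholds can be sketched as follows. Everything here is illustrative and not from the chapter: the class names, scores, and per-class thresholds are hypothetical, and the margin-based rule is one simple way such per-class τ values might be applied:

```python
# Hypothetical multisignal classification via per-class thresholds: each
# class k has its own threshold tau_k applied to its detector score.
def classify(scores, taus):
    """Assign the class with the largest score margin over its threshold,
    or reject when no class clears its threshold."""
    margins = {k: scores[k] - taus[k] for k in scores}
    best = max(margins, key=margins.get)
    return best if margins[best] >= 0 else "reject"

scores = {"signal_A": 0.82, "signal_B": 0.55, "signal_C": 0.30}
taus   = {"signal_A": 0.70, "signal_B": 0.60, "signal_C": 0.50}
print(classify(scores, taus))  # → signal_A
```

Because each class carries its own τ, tabulating PD and PF per class as its threshold varies is exactly the threshold information that a single (PD, PF) curve cannot convey.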
