An LS-based approach designs an LS-based algorithm that is first applied to the original data to extract data sample vectors characterized by second-order statistics of IBSI(S) as BKG signatures, and is then applied again to the sphered data to capture data sample vectors characterized by high-order statistics (HOS) of IBSI(S) as target signatures. Data sphering removes the data sample mean and covariances while making the data variances unity, so that data sample vectors completely characterized by second-order statistics of IBSI(S) are forced onto the sphere, while all other data sample vectors, characterized by HOS of IBSI(S), fall either inside the sphere (sub-Gaussian samples) or outside it (super-Gaussian samples). As a consequence, data sample vectors characterized by IBSI(S) of orders higher than 2 can be extracted from inside or outside the sphere. Interestingly, the idea of applying the same algorithm to different data sets derived from the same original data set was not explored until Chang et al. (2010, 2011).
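Data sphering can be sketched as follows. This is a minimal illustration, not the book's implementation; the text does not fix a particular whitening transform, so symmetric eigen-decomposition whitening is assumed here, and the function name `sphere_data` is introduced only for this example.

```python
import numpy as np

def sphere_data(X):
    """Sphere (whiten) the data: remove the sample mean and rescale so
    that the covariance matrix becomes the identity (unit variances,
    zero covariances). X is an (n_samples, n_bands) array.
    """
    Xc = X - X.mean(axis=0)                    # remove data sample mean
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    W = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T   # whitening matrix
    return Xc @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))  # correlated data
Xs = sphere_data(X)
print(np.allclose(np.cov(Xs, rowvar=False), np.eye(4)))  # True
```

After sphering, every data sample vector lies in a coordinate system where second-order statistics carry no information, which is what forces samples distinguished only by second-order statistics onto the sphere.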
In what follows, the three least squares (LS)-based algorithms developed for SQ-EEAs in Chapter 8 can be used to find VSs directly from the data. The first is ATGP, which is an orthogonal subspace projection (OSP)-based algorithm. Since OSP is a least squares-based criterion, ATGP can also be viewed as an unsupervised version of an unconstrained LS-based LSMA method. The second LS-based algorithm is an unsupervised version of the partially abundance-constrained least squares method NCLS, called unsupervised NCLS (UNCLS), and the third is an unsupervised version of the fully abundance-constrained least squares method FCLS, called unsupervised FCLS (UFCLS). When these three unsupervised LS-based algorithms are implemented, a prescribed error threshold ε, determined by the application, is required to terminate them. In general, ε is chosen by visual inspection on a trial-and-error basis, which is not practical for our purpose. Therefore, instead of using ε as a stopping rule, we use the VD as an alternative stopping rule to determine how many targets need to be generated. This is because the VD is estimated by methods that are independent of both the application and the particular algorithm in use, such as the Harsanyi–Farrand–Chang (HFC) method and SSE/HySime in Chapter 5.
For the proposed LS-based algorithms to be successful, we also assume that the BKG consists of a large number of uninteresting data sample vectors in the sample pool S that can be characterized by second-order statistics of IBSI(S), as opposed to target sample vectors, which occur in small numbers in S and must be captured by higher-order statistics of IBSI(S). Under this assumption, two data sets can be derived from the original data: one is the original data itself, and the other is the sphered data, obtained by removing the mean and covariance from the original data. We then apply the three unsupervised LS-based algorithms to these two data sets to extract second-order BKG data sample vectors as well as high-order target data sample vectors. However, if a data sample vector shows strong signal statistics in both the original and the sphered data sets, it is considered a target sample vector and is removed from the BKG class.
A detailed implementation of the LS-based unsupervised VS finding algorithm (LS-UVSFA) is described below, where the LS-based unsupervised algorithm used in LS-UVSFA can be any of the three unsupervised LS algorithms described above: ATGP, UNCLS, or UFCLS.
LS-based Unsupervised VS Finding Algorithm (LS-UVSFA)
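The overall flow described above can be sketched as follows. This is a minimal outline under stated assumptions, not the book's procedure: `ls_algorithm(X, n)` is an assumed interface standing in for ATGP, UNCLS, or UFCLS, `vd` plays the role of the VD stopping rule, and the `max_norm_picker` stand-in is hypothetical, used only to make the example runnable.

```python
import numpy as np

def ls_uvsfa(X, vd, ls_algorithm):
    """LS-UVSFA sketch: run the same LS-based unsupervised algorithm
    once on the original data (second-order BKG signatures) and once on
    the sphered data (higher-order target signatures), then remove from
    the BKG set any sample that also appears in the target set.
    """
    Xc = X - X.mean(axis=0)                    # remove sample mean
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    Xs = Xc @ vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T  # sphered data

    bkg = set(ls_algorithm(X, vd))             # BKG signatures
    tgt = set(ls_algorithm(Xs, vd))            # target signatures
    bkg -= tgt        # strong in both data sets -> treated as a target
    return sorted(bkg), sorted(tgt)

# hypothetical stand-in for an LS algorithm, for illustration only:
# pick the n samples with the largest vector lengths
def max_norm_picker(X, n):
    return list(np.argsort(np.sum(X**2, axis=1))[-n:])

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
bkg, tgt = ls_uvsfa(X, 4, max_norm_picker)
print(set(bkg).isdisjoint(tgt))  # True: overlapping samples become targets
```

The final set difference implements the rule stated earlier: a sample showing strong signal statistics in both the original and sphered data sets is classified as a target and excluded from the BKG class.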
It should be noted that when a specific LS-based algorithm is used, the superscript "LS" in the above algorithm is replaced by the name of that algorithm. For example, if ATGP is used, LS-UVSFA is called ATGP-UVSFA.