10.1 Introduction

In Chapters 7 and 8 we witnessed the inconsistency of the final results produced by an EEA using random initial conditions. This dilemma can be resolved by the ID-EEAs developed in Chapter 9, which use a set of specific initial endmembers generated by a custom-designed initialization algorithm. This chapter investigates an idea completely opposite to that used in ID-EEAs: it turns the disadvantage of using random initial endmembers into an advantage for an EEA. The idea originates from the concept of a random variable, where the realizations resulting from repeated physical experiments constitute an ensemble of outcomes produced by that random variable. The following example sheds light on how an EEA using random initial conditions can be considered a random algorithm.

Now consider the problem of estimating the bias of a coin, defined as the probability θ of a head turning up. A process of flipping the coin a fixed number of N times (i.e., N trials) can then be considered a random algorithm, where the randomness is caused by the uncertainty in the bias specified by the parameter θ. To estimate the value of θ, we flip the coin N times, which produces N outcomes, each of which is either a head or a tail. So a single run of such an N-coin-flipping experiment produces one realization consisting of N outcomes. The parameter θ can then be estimated from a realization by the estimator $\hat{\theta}(\mathbf{y}) = n_H(\mathbf{y})/N$, where $\mathbf{y} = (y_1, y_2, \ldots, y_N)$ represents the N outcomes resulting from the N coin flips, with each $y_i$ being either a head or a tail, and $n_H(\mathbf{y})$ is the number of heads turned up in the N outcomes $y_1, y_2, \ldots, y_N$ specified by $\mathbf{y}$. Let the number of runs implemented by the random algorithm be denoted by n and the random algorithm be represented by the estimator $\hat{\theta}$; then $\hat{\theta}^{(j)}$ denotes the jth run of the random algorithm $\hat{\theta}$. As a result, the average over a total of n runs of $\hat{\theta}$ converges to θ, that is, $\frac{1}{n}\sum_{j=1}^{n}\hat{\theta}^{(j)} \to \theta$ as $n \to \infty$.
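As a quick illustration of this convergence, the following Python sketch simulates the coin-flipping experiment: each run flips the coin N times and returns the estimate $n_H(\mathbf{y})/N$, and the estimates are then averaged over n independent runs. The true bias and the trial counts used below are illustrative values chosen for the sketch, not quantities taken from the text.

```python
# A minimal sketch of the coin-flipping example: each "run" flips a coin N
# times and estimates the bias theta as the fraction of heads; averaging the
# per-run estimates over n independent runs converges to the true theta.
# THETA, N, and n_runs are illustrative choices for this sketch only.
import random

THETA = 0.3    # true (unknown) bias, i.e., probability of a head
N = 100        # number of coin flips per run
n_runs = 2000  # number of independent runs of the random algorithm


def estimate_theta(num_flips: int, bias: float) -> float:
    """One run: flip the coin num_flips times and return n_H(y) / N."""
    heads = sum(1 for _ in range(num_flips) if random.random() < bias)
    return heads / num_flips


estimates = [estimate_theta(N, THETA) for _ in range(n_runs)]
average = sum(estimates) / n_runs
print(f"true theta = {THETA}, average over {n_runs} runs = {average:.4f}")
```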

In light of the above interpretation, we can view an EEA that uses random initial endmembers as a random endmember extraction algorithm (REEA), where a single run is defined by implementing the REEA with one set of random initial endmembers and the result produced by a single run is considered one realization. In this case, all realizations constitute an ensemble of final endmembers extracted by the REEA using all possible sets of random initial endmembers. According to the asymptotic equipartition property theorem in information theory (Cover and Thomas, 1991), a realization that contains the final endmembers can be considered a typical realization, so that such typical realizations are approximately equally likely. This suggests that the average of a total of n runs can be interpreted as the intersection of all n runs. The underlying assumption is that if a certain piece of information carried by a random variable is crucial, this information will be preserved in its realizations. Keeping this in mind, if the information of endmembers is considered important information, these endmembers should appear in the final selected set of endmembers in the realizations, regardless of which initial endmembers are used for algorithm initialization. In other words, if an EEA is implemented repeatedly with different sets of random initial endmembers, all the resulting realizations shall eventually converge to a common set, which should be the desired set of the true endmembers. That is, the intersection of the final endmembers generated by all the different runs should include the true endmembers. Using this criterion, an REEA is terminated when the final sets of endmembers produced by consecutive runs converge to the same set of endmembers, in which case the REEA automatically finds endmembers without knowing the value of p a priori, a requirement of most EEAs reported in the literature. However, this benefit comes at the price of high computational complexity, which results from running an EEA multiple times. Nevertheless, given that it is nearly impossible to know the exact number of endmembers in real-world problems, this trade-off may be worthwhile.
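The following Python sketch illustrates the stopping rule just described, under the assumption that an EEA is available as a callable that maps a data set and a set of random initial endmembers (here, pixel indices) to a final set of extracted endmembers. The names run_eea, draw_random_initial_endmembers, num_initial, and max_runs are hypothetical placeholders introduced only for this illustration; they are not part of the algorithms developed in this chapter.

```python
# A minimal sketch of an REEA wrapper: run a given EEA repeatedly with random
# initial endmembers and terminate when the final endmember sets of two
# consecutive runs coincide. The callable run_eea is assumed, not defined here.
import random
from typing import Callable, Sequence, Set


def random_eea(data: Sequence,
               run_eea: Callable[[Sequence, Set[int]], Set[int]],
               num_initial: int,
               max_runs: int = 50) -> Set[int]:
    """Repeat an EEA with random initial endmembers until consecutive runs agree."""
    def draw_random_initial_endmembers() -> Set[int]:
        # pick num_initial distinct pixel indices at random as initial endmembers
        return set(random.sample(range(len(data)), num_initial))

    previous = run_eea(data, draw_random_initial_endmembers())
    for _ in range(max_runs - 1):
        current = run_eea(data, draw_random_initial_endmembers())
        if current == previous:   # consecutive runs produced the same set: stop
            return current
        previous = current
    return previous               # fall back to the last realization
```

In practice one might require more than two consecutive runs to agree before terminating; the two-run check above is only the simplest form of the criterion stated in the text.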

This chapter extends the two most popular SM-EEAs of Chapter 7, the pixel purity index (PPI) and N-FINDR, along with their two sequential counterparts of Chapter 8, VCA and SGA, to their random algorithm counterparts, referred to as random PPI (RPPI), random N-FINDR (RN-FINDR), random VCA (RVCA), and random SGA (RSGA), respectively. Using a similar approach, all the statistics-based and linear-spectral-unmixing-based EEAs can also be extended to their random versions.
