5.1 Introduction

Neural networks have played a major role in many applications (e.g., see [171, 368]). In particular, they have demonstrated convincing performance in the detection and recognition of object classes, mainly owing to their capability to cope with a variety of cues such as texture, intensity, edge, color, and motion. In the context of personal identification, neural networks can facilitate the detection or recognition of high-level features extracted from facial images or speakers' voices.

Supervised-learning networks represent the mainstream of development in neural networks for biometric authentication. Well-known pioneering examples include the perceptron network [321], ADALINE/MADALINE [382], and various multi-layer networks [247, 269, 326, 377]. A supervised-learning network operates in two phases: a retrieving phase and a learning phase. In the retrieving phase, the objective is to determine the class to which a pattern belongs, based on which of the output values wins. The output values are functions of the input values and the network weights, called the discriminant functions. In the learning phase, the weights are trained so that patterns (learned and/or unlearned) are more likely to be correctly classified.
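The two phases above can be sketched in a few lines. The snippet below is a minimal, illustrative example (the linear discriminant form, the function names, and the perceptron-style update rule are assumptions for illustration, not a prescription from the text): the retrieving phase picks the class whose discriminant output wins, and the learning phase nudges the weights toward correct classification.

```python
import numpy as np

# Illustrative sketch: each class c has a linear discriminant
# g_c(x) = w_c . x + b_c; the retrieving phase assigns x to the
# class whose discriminant output is largest (winner-take-all).

def retrieve(x, W, b):
    """Retrieving phase: winner-take-all over the discriminant outputs."""
    outputs = W @ x + b          # one discriminant value per class
    return int(np.argmax(outputs))

def learn_step(x, label, W, b, lr=0.1):
    """Learning phase: a perceptron-style update applied on a mistake.
    Reinforce the correct class; antireinforce the wrongly chosen winner."""
    winner = retrieve(x, W, b)
    if winner != label:
        W[label] += lr * x       # reinforce the correct class
        b[label] += lr
        W[winner] -= lr * x      # antireinforce the winning (wrong) class
        b[winner] -= lr
    return W, b
```

Repeatedly applying `learn_step` over a training set drives the discriminant functions toward a decision boundary that separates the classes, which is the essence of the learning phase described above.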

This chapter presents several important variants of multi-layer networks. First, the popular back-propagation (BP) algorithms for training multi-layer networks are explored. The ultimate performance of BP algorithms is widely recognized to be sensitive to initial conditions. With convergence behavior in mind, the text then examines a two-stage training strategy so that back-propagation training of multi-layer networks is more likely to start from a reliable initial setting. To give more comprehensive coverage of learning algorithms for multi-layer networks, a complementary learning scheme, known as the genetic algorithm, is also presented.
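As a concrete, deliberately minimal illustration of back-propagation, the sketch below trains a one-hidden-layer network by gradient descent on a mean-squared error. The layer sizes, learning rate, sigmoid activations, and random initialization are illustrative assumptions, not particulars from the text; the random initialization also hints at why BP performance is sensitive to initial conditions.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative fixed seed

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, y, hidden=4, lr=1.0, epochs=5000):
    """Back-propagation sketch for a one-hidden-layer sigmoid network."""
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: propagate the output error layer by layer
        d_out = (out - y) * out * (1 - out)        # MSE + sigmoid derivative
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)
    return W1, b1, W2, b2

def predict(X, W1, b1, W2, b2):
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
```

A different seed for `rng` changes the initial weights and can change how quickly (or whether) training converges, which is the motivation for the two-stage strategy of securing a reliable initial setting before BP refinement.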

Advanced network structures can be more effectively employed to integrate different training disciplines such as local experts, unsupervised modules, and supervised-learning strategies. Chapter 6 presents the concept of modular networks, mixture-of-experts, and hierarchical network structures. Both expert-based and class-based modular networks are introduced. Chapter 7 demonstrates how existing unsupervised modules (e.g., RBF or LBF clustering) and supervised-learning (e.g., reinforced/antireinforced) strategies can be naturally combined to lead to a decision-based neural network.
