Chapter 11

A Dynamic Learning-based Approach to the
Surveillance and Monitoring of Steam
Generators in Prototype Fast Reactors 1

This research focuses on the surveillance and monitoring of evolving systems [ANG 04, LUG 11b, KAS 07, ANG 10, LUG 11a] using learning methods and dynamic classification. An evolving system changes from one mode to another either suddenly (a jump) or progressively (a drift) over time. This evolution results from changes in the system caused by a leak, damage to equipment, an adjustment, etc. When a static pattern-recognition method is used to construct class models for an evolving system, it can classify new observations by comparing them to existing ones. It does not, however, take into account new characteristic information that should be used to update the class models (membership functions). As a result, a static classifier is poorly suited to representing the current characteristics of an evolving system. For this reason, this chapter applies the method that we propose to a steam generator in a prototype fast reactor. This method, based on the fuzzy K-nearest neighbors (FKNN) method [KEL 85], is semi-supervised and is called the semi-supervised dynamic fuzzy K-nearest neighbors (SS-DFKNN) method. It allows us to take into account new information about an evolving system, detect unknown classes and adapt their characteristics. The SS-DFKNN method was developed to detect and monitor the evolution of dynamic classes online, adapt these classes and estimate the current characteristics of a system.

11.1. Introduction

The monitoring and supervision of evolving systems requires continuous learning over time to account for evolutions and changes in their environment. Incremental learning methods are an effective way of carrying out this continuous learning: they allow information to be integrated online and improve the estimation of class models. However, these methods adapt class models without challenging previous findings; they consider all patterns to be representative of the classes. In reality, some of these patterns become obsolete and should not be used to update the class models. A mechanism is therefore required that discards obsolete patterns and keeps only those that are characteristic of the classes after an evolution. Methods with such a mechanism are called dynamic learning and classification methods. The challenge with these methods is to select the patterns that are representative of class changes while avoiding a catastrophic loss of information.

In the literature, class models are adapted by acting directly on the classifier's parameters, substituting or adding certain recent patterns that are representative of the learning set according to the state (stable, slow or rapid change) of the system [ANG 04, ANG 00, NAK 97]. This adaptation uses a sliding window, a selection criterion or a forgetting factor.

A sliding window, of either fixed or variable size, limits the growing size of the database by keeping only the n most recent patterns [NAK 97]. Its size must be chosen carefully to obtain a compromise between rapid adaptation and a sufficient number of representative patterns.
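As an illustration, a minimal sketch of such a fixed-size window in Python (the window length n = 100 is an arbitrary example value, not one prescribed by the references):

```python
from collections import deque

# Fixed-size sliding window: only the n most recent patterns are kept;
# the oldest pattern is dropped automatically once the window is full.
window = deque(maxlen=100)  # n = 100 is an arbitrary example value

def receive_pattern(x):
    """Store a new pattern and return the current learning set."""
    window.append(x)
    return list(window)
```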

A selection criterion allows us to select patterns according to their age and usefulness [GIB 94]. A pattern's age cannot be the only selection criterion, since some patterns may be outliers or correspond to noise. The usefulness of a pattern can be defined by the change in the system it reflects. For example, if one of the system's parameters evolves significantly following a characteristic change in the system, the displacement of the patterns along this parameter reflects the significance of the evolution. Nevertheless, it is difficult to estimate the usefulness of patterns.

A forgetting factor is applied to the patterns to detect those that are less characteristic of the system's current functioning. This forgetting can occur at a constant or variable rate, so that each pattern has a usefulness value between 0 and 1.
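A possible sketch of constant-rate forgetting; the decay rate and the pruning threshold below are illustrative assumptions:

```python
import numpy as np

def apply_forgetting(usefulness, rate=0.95):
    """Decay every pattern's usefulness at a constant rate (values stay in [0, 1])."""
    return usefulness * rate

# After each acquisition, old patterns lose weight; patterns whose
# usefulness falls below a chosen threshold can then be discarded.
usefulness = np.ones(5)            # all patterns start fully useful
for _ in range(10):                # ten acquisition steps
    usefulness = apply_forgetting(usefulness)
keep = usefulness > 0.5            # patterns still considered useful
```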

Other methods adapt their classifier's structure as well as its parameters; these are known as evolving neural networks [AMA 06, COH 05, LEC 03]. In [ANG 04], the evolving Takagi-Sugeno method was developed to account for the evolution of data. In this method, a potential function is based on the distance between patterns. The potential of the first point is set equal to one; it establishes the first neuron (or rule), which is considered a prototype (or center) of the first class. New data may then have a potential close to or greater than that of this neuron, and thereby reinforce or confirm the information contained in the previous neurons. In [AMA 06, LEC 03], the neural network is based on multi-prototype Gaussian modeling of non-convex classes. The activation function of each hidden neuron determines an observation's degree of membership of a class prototype. At the initialization of the method, the first pattern creates the first prototype, which constitutes the first class; the prototype is characterized by its center and its initial covariance matrix. Depending on the membership values of new acquisitions, a prototype (hidden neuron) can be adapted or eliminated, or a new prototype can be created.

We have chosen to develop an approach based on the FKNN method [KEL 85], which is well known and often used for automatic learning applications. The method we have developed, SS-DFKNN, allows us, as mentioned, to take into account new information about an evolving system, detect unknown classes and estimate their characteristics. Semi-supervised methods are particularly well suited to evolving systems where not all classes are known in advance. Like FKNN, SS-DFKNN uses the notion of distance between patterns to classify new data. Two evolution indicators are calculated from the class parameters; they are used in the detection and confirmation phases of class evolutions. During the adaptation phase of classes that have evolved, the most informative patterns are selected and the classifier is updated. SS-DFKNN thus addresses the problems faced by dynamic classifiers, such as class drift, fusion, splitting and rotation.

11.2. Proposed method for the surveillance and monitoring of a steam generator

In this section, the SS-DFKNN approach used to monitor the evolution of dynamic classes is presented. This method [HAR 10] has been developed to detect the evolution of dynamic classes online, adapt their characteristics and detect the appearance of new classes. This version is semi-supervised, with the aims of:

– taking into consideration the patterns initially known in a system, i.e. the learning set X that represents the known classes;

– improving the characteristics of classes using new patterns; and

– detecting new classes created by the evolution of a system.

The evolution of classes can even be considered in parts of the representation space where no pattern has been learned. In this version, a class that begins to move retains its initial patterns, while the patterns corresponding to an evolution of this class constitute a new class. It is this new class that best estimates the system's current functioning mode. The main phases of the SS-DFKNN method are shown in Figure 11.1.

11.2.1. Learning and classification

To start, labeled data are initially learned. For each learned class Ci, at least two patterns must be known in order to calculate its initial center of gravity c_ij as well as its standard deviation σ_ij for each attribute j. These two values are taken into account when calculating the method's two evolution indicators. At each time t, they are updated incrementally by:

[11.1] $c_{ij}(t) = \frac{N_i \, c_{ij}(t-1) + x_j}{N_i + 1}$

and:

[11.2] $\sigma_{ij}^{2}(t) = \frac{N_i \, \sigma_{ij}^{2}(t-1) + \bigl(x_j - c_{ij}(t-1)\bigr)\bigl(x_j - c_{ij}(t)\bigr)}{N_i + 1}$

where Ni is the number of patterns in Ci before the classification of x, and σ²_ij and c_ij are the class's variance and center of gravity according to attribute j before the classification of x.

It should also be noted that c_ij and σ_ij can be calculated for every type of class. In the case of complex classes, several Gaussian subclasses are created and then merged in order to obtain the other kinds of classes.
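A minimal sketch of the incremental update, assuming the reconstruction of equations [11.1] and [11.2] given above (vectorized over the attributes j):

```python
import numpy as np

def update_class_stats(c, var, N, x):
    """Update a class's center of gravity and variance (one value per
    attribute j) when pattern x is classified into it, following the
    reconstructed equations [11.1] and [11.2]."""
    c_new = (N * c + x) / (N + 1)                          # [11.1]
    var_new = (N * var + (x - c) * (x - c_new)) / (N + 1)  # [11.2]
    return c_new, var_new, N + 1
```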

In the method's classification phase, each new pattern is classified sequentially according to its k nearest neighbors. It is therefore necessary to initially define parameter k.
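For reference, a compact sketch of the FKNN classification rule of [KEL 85], on which SS-DFKNN is built; the fuzzifier m = 2 is the usual default:

```python
import numpy as np

def fknn_classify(x, X, memberships, k=5, m=2):
    """Fuzzy k-nearest-neighbors rule [KEL 85].
    X: (N, d) learned patterns; memberships: (N, n_classes) fuzzy labels.
    Returns the membership vector of x for each class (argmax = decision)."""
    dist = np.linalg.norm(X - x, axis=1)
    nn = np.argsort(dist)[:k]                 # the k nearest neighbors of x
    w = 1.0 / np.maximum(dist[nn], 1e-12) ** (2.0 / (m - 1))
    return (memberships[nn] * w[:, None]).sum(axis=0) / w.sum()
```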

Figure 11.1. The different phases of the SS-DFKNN method


11.2.2. Detecting the evolution of a class

Two evolution indicators are used to detect changes in the characteristics of the class that receives the new pattern. The first indicator, ind1j [11.3], gives a measure of the class's compactness:

[11.3] $ind1_j = \frac{\bigl|\sigma_{ij}(t) - \sigma_{ij}^{init}\bigr|}{\sigma_{ij}^{init}} \times 100$

ind1j is given as a percentage, where σ_ij^init represents the class's initial standard deviation. If at least one of the attributes j obtains a value of ind1j greater than the threshold th1, then class Ci has begun to change its characteristics. th1 can be fixed at a small value, such as five, which is effective for monitoring progressive evolutions of a class. A class can also evolve suddenly, however, in which case a larger value of th1 may be necessary.

The second indicator, ind2j, represents the drift between the current point and its class's center of gravity relative to its standard deviation:

[11.4] $ind2_j = \frac{\bigl|x_j - c_{ij}\bigr|}{\sigma_{ij}} \times 100$

Here, ind2j is also given as a percentage. If at least one attribute j obtains a value of ind2j greater than th1 (max(ind2j) ≥ th1), then the point has a weak membership value for class Ci. A single point far from the class is not sufficient, however, to conclude that the class's characteristics have changed: the point may simply be noise. It is therefore necessary to define a value NbMin representing the number of successive times that ind2j must exceed th1 in order to confirm an evolution. If NbMin is fixed at too large a value, the delay in detecting the class's evolution may be too long. NbMin must therefore be defined as a compromise between the noise present in the patterns of the representation space and the maximum acceptable delay in detecting an evolution. The evolution of a class is thus confirmed when NbMin successive values of the two indicators ind1j and ind2j are greater than th1.
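A sketch of the detection step, assuming the reconstructed indicators [11.3] and [11.4]; the evolution counter is reset whenever a pattern fails to exceed th1:

```python
import numpy as np

def evolution_indicators(x, c, sigma, sigma_init):
    """Both indicators, in percent, for the class that just received x."""
    ind1 = np.abs(sigma - sigma_init) / sigma_init * 100  # compactness change [11.3]
    ind2 = np.abs(x - c) / sigma * 100                    # drift of x from the center [11.4]
    return ind1, ind2

def confirm_evolution(counter, ind1, ind2, th1=5.0, nb_min=6):
    """Count successive exceedances of th1 by both indicators; the
    evolution is confirmed once NbMin successive patterns exceed it."""
    counter = counter + 1 if (ind1.max() >= th1 and ind2.max() >= th1) else 0
    return counter, counter >= nb_min
```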

11.2.3. Adapting a class after validating its evolution and creating a new class

When the evolution of a class is confirmed, a new class is created by SS-DFKNN based only on the patterns representative of the evolution.

The different parts of this adaptation phase are:

– The creation of a new class, C', and the selection of the patterns representing the evolution. For the latter, the last classified pattern x is selected together with its k − 1 nearest neighbors. No new distance needs to be calculated because the nearest neighbors of x were already identified during its classification. The points representing the evolution are thus based on the most recent change.

– The last classified pattern x and its k − 1 nearest neighbors are the only patterns kept to create C'.

– The selected k patterns are deleted from class Ci.

– The center of gravity c_ij and the current standard deviation σ_ij of class Ci are updated.

– c'_j and σ'_j are calculated for the new class C'. These values are computed quickly because the class contains only k points.

– The number of classes is updated.

This phase allows us to follow the evolution of a class online. If the class is complex, a series of Gaussian subclasses may appear during its evolution; the splitting and drift of classes are thus taken into account. If no evolution takes place, the patterns are classified normally.
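A sketch of the adaptation steps listed above, under the assumption that the class's patterns are held in an array and that nn_idx indexes the last classified pattern and its k − 1 nearest neighbors:

```python
import numpy as np

def split_evolved_class(Xi, nn_idx):
    """Create the new class C' from the k patterns representing the
    evolution and remove them from class Ci; return the updated
    patterns and (center, standard deviation) of both classes."""
    X_new = Xi[nn_idx]                       # patterns of the new class C'
    mask = np.ones(len(Xi), dtype=bool)
    mask[nn_idx] = False
    Xi = Xi[mask]                            # class Ci without those k patterns
    stats_i = (Xi.mean(axis=0), Xi.std(axis=0))          # updated c_ij, sigma_ij
    stats_new = (X_new.mean(axis=0), X_new.std(axis=0))  # c' and sigma' of C'
    return Xi, X_new, stats_i, stats_new
```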

If a class is considered useless, it may be necessary to delete it, which also limits the growth of the dataset. Equally, two classes must sometimes be merged because they have ended up with the same characteristics. Solutions to both of these problems are presented in the following section.

11.2.4. Validating classes

Noise in the classes has already been taken into account by SS-DFKNN by means of the threshold NbMin. However, in some cases, it may be necessary to delete one or more classes:

– when a transitory class is created and kept even after the class has reached its final destination (a transitory class does not represent a functioning mode);

– when a class that is considered noisy is created; and

– when a class containing little information has been preserved for a given amount of time.

To handle these cases, suppress such classes and update the classifier, SS-DFKNN examines two validity criteria:

– an insufficient number of patterns (fewer than n1) is contained in the class; and

– no pattern has been classified into the class while the last n2 patterns were classified into other classes.

Each class that meets these criteria is deleted. Note that this suppression of non-representative classes is not necessary for all applications; for applications whose data are of an important or critical nature, it may be preferable to preserve the characteristic data of all classes.
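A sketch of this validation test; whether the two criteria are combined conjunctively or disjunctively is not fully specified in the text, so the conjunction below is an assumption:

```python
def class_is_kept(n_patterns, since_last_assignment, n1=10, n2=20):
    """Return False (delete the class) when it contains fewer than n1
    patterns and none of the last n2 classified patterns joined it."""
    return not (n_patterns < n1 and since_last_assignment >= n2)
```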

Figure 11.2 shows an example of the suppression of a class. A class is initially known; an evolution occurs and leads the class towards its final destination. In this case, the patterns of the transitory class must be deleted. The parameters of the SS-DFKNN method in this case are: k = 5, th1 = 5, NbMin = 5, n1 = 10, n2 = 20 and thFusion = 0.2.

SS-DFKNN considers the first evolution of the class as a new class. A second class is then created, and the transitory class, which has been judged non-representative, is deleted.

Figure 11.2. Suppression of a class by SS-DFKNN. In a), the patterns of X are denoted by *, the transitory patterns by + and the patterns of the evolved class by *. b) Classification result obtained by SS-DFKNN; the patterns of the transitory class are deleted


For merging classes, the similarity measure proposed by Frigui [FRI 96] [11.5] has been integrated into SS-DFKNN in order to verify, after each classified pattern, whether it is necessary to merge two classes whose similarity is greater than or equal to thFusion:

[11.5] $\delta_{iz} = \frac{\sum_{x} \min\bigl(\pi_i(x), \pi_z(x)\bigr)}{\sum_{x} \max\bigl(\pi_i(x), \pi_z(x)\bigr)}$

where πi(x) and πz(x) are the membership values of x to Ci and Cz, respectively. The nearer δiz is to one, the more similar the two classes are; the maximum value indicates that the two classes overlap completely.
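A sketch of the merging test, assuming the reconstructed similarity [11.5] computed over the membership values of the already classified patterns:

```python
import numpy as np

def class_similarity(pi_i, pi_z):
    """Similarity between classes Ci and Cz from membership vectors
    pi_i, pi_z over the same patterns; 1 means complete overlap."""
    return np.minimum(pi_i, pi_z).sum() / np.maximum(pi_i, pi_z).sum()

def should_merge(pi_i, pi_z, th_fusion=0.2):
    """Merge the two classes once their similarity reaches thFusion."""
    return class_similarity(pi_i, pi_z) >= th_fusion
```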

11.2.5. Defining the parameters of the SS-DFKNN method

As with all pattern recognition methods, the chosen parameter values influence the classifier's performance. We propose default values that are generally suited to dynamic systems:

k is the number of neighbors taken into account by k-nearest-neighbors methods to classify a point. It is the parameter common to all k-nearest-neighbors methods and the most important one. It should be defined according to the size of the database, the noise present in the system's observations and the proximity of the classes.

th1 is one of the most important parameters of SS-DFKNN. It is used by the method's two evolution indicators to detect class evolutions. A class that does not evolve keeps the same characteristics, even in the presence of noise; the characteristics of a class change, however, in the case of a sudden or progressive evolution. A value of th1 equal to five is a good compromise, allowing classes and their characteristics to be adapted without requiring an evolution to be large before it is detected.

NbMin allows us to validate an evolution and influences the results through the class's adaptation time. It should be defined as at least equal to k (NbMin ≥ k), so that enough representative patterns are available to fully estimate the characteristics of a new class. It should also not be set too high, as the detection of an evolution could otherwise be delayed. NbMin should be set to around k when k is small and up to k + 5 when k is large. These values have been determined by experimentation. If k and NbMin are both small, the risk of false alarms increases.

thFusion is an optimization parameter used to merge classes. Even if no fusion occurs, the mere appearance of a new class indicates that there has been an evolution in the system; in this case, an alarm should be raised to call a human operator to verify the system's state. A value of thFusion near 0.2 allows classes that have begun to present the same characteristics to be merged.

n1 is one of the parameters used in the class validation phase. It should be set greater than k (n1 > k), since a class contains at least k patterns at its creation. By default, n1 can be set to 2k.

n2 is the other parameter used in the class validation phase. Its value should not be too low, because after the creation of a class it may be necessary to wait for more patterns to be classified into it. If no pattern is classified into the new class after a significant number of points, however, the class is not representative: it may be transitory or may correspond to noise. The value of n2 can be set to 20 by default; this value has also been obtained by testing. It means that at least one pattern in 20 should be classified into the new class in order to gradually confirm its usefulness. The other classes, even if they receive no additional patterns after a certain amount of time, are not suppressed, because they have already confirmed their usefulness by containing a sufficient number of patterns.
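Gathering the recommendations of this section, a possible set of default values (starting points to be tuned per application, not fixed constants):

```python
def default_parameters(k=5):
    """Default SS-DFKNN parameters suggested in section 11.2.5."""
    return {
        "k": k,            # number of neighbors used for classification
        "th1": 5,          # threshold (in %) on both evolution indicators
        "NbMin": k + 1,    # between k and k + 5; k + 1 is an illustrative choice
        "thFusion": 0.2,   # similarity threshold for merging two classes
        "n1": 2 * k,       # minimum number of patterns for a class to be kept
        "n2": 20,          # patience (in classified patterns) before deletion
    }
```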

11.3. Results

This application concerns the development of a passive acoustic tool for detecting water leaks in the steam generator (SG) of a prototype fast reactor. The aim is to detect water leaks of different flow rates and pressures, at various locations, with a very short detection delay.

The reaction produced by contact between water and the sodium used to cool the reactor's core causes an explosion. To simulate water leaks, argon was injected and the corresponding acoustic signals were recorded using sensors. This dataset was taken from tests carried out by the Atomic Energy Authority on the SG of a prototype fast reactor, and was subsequently used by the Commissariat à l'Energie Atomique (CEA, the French Atomic Energy Commission) to carry out tests. It is in this context that our research was carried out.

11.3.1. Data analysis

An initial frequency-domain and statistical analysis of the data was carried out to identify the informative and discriminative parameters needed to define the representation space. The SS-DFKNN method was then applied, using only the data known from the normal functioning class, to detect new classes and their evolutions. Each signal had a sampling frequency of 2,048 Hz. Figure 11.3 shows an acoustic signal resulting from an injection of argon into the SG.

We studied a significant number of statistical and frequency-domain parameters, including the root mean square value, kurtosis, skewness, the crest factor, the median value, the coefficients ai of an autoregressive model, etc. These parameters were calculated over a sliding window whose size was determined by learning. This window should be relatively small, so as to contain sufficient data without causing a significant detection delay.

One of the methods currently used on the SG detects the appearance of a problem in six seconds. One of our objectives was therefore to detect the argon injection into the sodium in less than six seconds. The window best adapted to this application contained 8,192 data samples (4 s) and slid by 2,048 data samples (1 s). These 2,048 samples correspond to a shift of one second in the signal, which gives a small surveillance delay. The use of this sliding window therefore allowed us to follow the evolution of the SG's operation online.
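A sketch of this windowed feature extraction with the stated sizes (8,192-sample window, 2,048-sample step at 2,048 Hz); only three of the studied parameters are computed here:

```python
import numpy as np
from scipy.stats import kurtosis, skew

FS = 2048          # sampling frequency (Hz)
WIN = 4 * FS       # analysis window: 8,192 samples (4 s)
STEP = FS          # slide: 2,048 samples (1 s)

def windowed_features(signal):
    """RMS, kurtosis and skewness over a 4 s window sliding by 1 s."""
    feats = []
    for start in range(0, len(signal) - WIN + 1, STEP):
        w = signal[start:start + WIN]
        feats.append((np.sqrt(np.mean(w ** 2)), kurtosis(w), skew(w)))
    return np.array(feats)
```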

In terms of temporal parameters, we sought to estimate the coefficients ai of an autoregressive model. To do this, we needed to identify the maximal order p of the model. We began by seeking the order of the model for part of the data using the Akaike information criterion and principal component analysis; this allowed us to fix the order p of the model at 13, and we were then able to estimate these 13 coefficients. Since our objective was not to build an exact predictive model of the system, however, we kept only the most informative coefficients among the 13 examined. Principal component analysis showed that two autoregressive coefficients, a3 and a5, account for 94.3% of the inertia. These two parameters also provided the lowest classification error for the two classes (injection and non-injection); this error was calculated for different combinations of parameters using the supervised fuzzy pattern matching (FPM) and support vector machine (SVM) methods. We also selected a third parameter, the normalized average, which improved the classification result.
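To illustrate the order selection, a sketch using the Yule-Walker equations and a simple AIC; the chapter fixed p = 13 and kept only a3 and a5, whereas this generic sketch merely returns the selected order and all of its coefficients:

```python
import numpy as np
from statsmodels.regression.linear_model import yule_walker

def select_ar_model(w, max_order=20):
    """Fit AR models of increasing order and keep the one minimizing
    a simple Akaike information criterion; returns (p, coefficients a_i)."""
    n, best = len(w), None
    for p in range(1, max_order + 1):
        rho, sigma = yule_walker(w, order=p)   # AR coefficients, noise std
        aic = n * np.log(sigma ** 2) + 2 * p   # AIC for an AR(p) model
        if best is None or aic < best[0]:
            best = (aic, p, rho)
    return best[1], best[2]
```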

Figure 11.3. The acoustic signal is composed of the injection and non-injection modes, with its injection command represented by the gray line


As Figure 11.3 shows, the system's functioning modes evolved when several leaks occurred in succession: the accumulation of leaks led to a worsening of the problem. The SS-DFKNN method was used to monitor these evolutions and characterize each of the evolved classes. SS-DFKNN only needed information about one class in order to be initialized; the fault classes were not all known a priori.

11.3.2. Classification results

The SS-DFKNN method used 53 points taken from the normal functioning mode (C1) of the SG as a learning set. The patterns of a typical signal (Figure 11.3) were then classified. The patterns and classes corresponding to the expected classification result are shown in Figure 11.4; the classification result actually obtained is shown in Figure 11.5.

Figure 11.4. Patterns corresponding to each injection and non-injection of the signal in Figure 11.3


When the patterns of the signal in Figure 11.4 are classified, several class evolutions occur. Several classes are created, and the resulting classes can help system experts characterize the system's different functioning modes. Some transitory classes, notably between C2 and C4 (Figure 11.5), were judged non-representative after some time and were therefore automatically discarded; these classes would not help to classify the patterns of other signals. For this application, the classes found allow the different functioning modes to be identified. The initial class C1 retained its characteristics, and the classes obtained correspond to those expected from the injection command (see Table 11.1).

Figure 11.5. Classification result obtained with semi-supervised DFKNN after classification of all patterns of the signal in Figure 11.3


Table 11.1 shows that there is only a small delay in detecting the evolution of each class (a maximum of 2 s). In Figure 11.4, each injection is followed by several transitory points belonging to the repeat injection class. Therefore, once points have been classified into an injection class, the following points must be classified into the repeat injection class.

Table 11.1. Correspondence of the classes found with injections and non-injections. The delay in detecting the evolution of each class using the SS-DFKNN method is also indicated

Number of injections or non-injections | Corresponding class found | Delay in detecting the evolution (in number of windows)
Non-injection | C1 | 0
1st injection | C2 | 1
Repeat injection | C4 | 2
2nd injection | C3 | 1
3rd injection | C5, C6 | 2
4th injection | C7 | 2

11.3.3. Designing an automaton to improve classification rates

To classify the points found in the transitory repeat injection zone (i.e. between injection and non-injection), we designed an automaton to be used in addition to the SS-DFKNN method. This automaton monitors the cycle of changes between the different functioning modes (Figure 11.6).

Figure 11.6. Automaton used with SS-DFKNN to follow changes in functioning mode. NI corresponds to non-injection, Ik corresponds to the kth injection, and RI corresponds to the return of injections


The automaton models the order of the transitions between the normal functioning mode (non-injection) and the faulty functioning modes (injection) over the course of the SG's operation. It also models the degrees of failure that are possible when an argon leak occurs in the prototype fast reactor's SG. Note that the intensity of the injections follows the evolution of the machine: injection I2 is more significant than injection I1, I3 is greater than I2, and I4 is greater than all of the previous injections. If a significant argon leak (I4) occurs in the SG, the system moves from the normal functioning mode (NI) towards mode I4, passing through modes I1, I2 and I3.
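A sketch of such an automaton; the exact transition set of Figure 11.6 is not reproduced in the text, so the table below is an assumption consistent with the description (NI to I4 through I1, I2 and I3, with RI between injections and non-injection):

```python
# Assumed transition table for the functioning-mode automaton.
TRANSITIONS = {
    "NI": {"NI", "I1"},
    "I1": {"I1", "I2", "RI"},
    "I2": {"I2", "I3", "RI"},
    "I3": {"I3", "I4", "RI"},
    "I4": {"I4", "RI"},
    "RI": {"RI", "NI", "I1", "I2", "I3", "I4"},
}

def next_mode(current, candidate):
    """Accept the classifier's decision only if the transition is allowed;
    otherwise keep the current mode (the decision is treated as noise)."""
    return candidate if candidate in TRANSITIONS[current] else current
```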

We obtained the classification results shown in Table 11.2 by combining the SS-DFKNN classification results with the automaton discussed previously (see Figure 11.6). For comparison, the classification results obtained by incremental fuzzy pattern matching (IFPM) [SAY 02], incremental support vector machines (ISVM) [CAU 01] and incremental k-nearest neighbors (IKNN) [ZHO 01] are also shown in Table 11.2. These results show the classification error rate, written errclassif. A classifier's performance decreases significantly as the classes evolve over time; this criterion therefore shows the importance of updating the classifier's parameters in order to maintain its performance level. errclassif is calculated by:

$e\bigl(C_{est}(x_i), C_{real}(x_i)\bigr) = \begin{cases} 0 & \text{if } C_{est}(x_i) = C_{real}(x_i) \\ 1 & \text{otherwise} \end{cases}$

[11.6] $err_{classif} = \frac{100}{n} \sum_{i=1}^{n} e\bigl(C_{est}(x_i), C_{real}(x_i)\bigr)$

where n is the number of patterns classified, e indicates whether a pattern is misclassified, Cest(xi) is the class estimated for xi and Creal(xi) is the real class of xi.
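A direct implementation of this criterion (expressed here as a percentage, matching the rates reported below):

```python
import numpy as np

def err_classif(c_est, c_real):
    """Classification error rate [11.6]: the proportion of patterns
    whose estimated class differs from their real class, in percent."""
    c_est, c_real = np.asarray(c_est), np.asarray(c_real)
    return float(np.mean(c_est != c_real)) * 100
```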

Few patterns are misclassified by SS-DFKNN. The most significant result is that the first injection is detected perfectly: the patterns of this injection are well classified, at 98.9%, in less than 2 s. On the basis of this result, it is possible to detect the start of a problem in the SG very quickly. Table 11.2 also shows that the other injection and non-injection patterns are well classified, at 96.8%. The method can therefore be used to monitor the SG's functioning modes online.

Table 11.2. Classification results obtained by SS-DFKNN (k = 5; th1 = 5; NbMin = 6; thFusion = 0.2; n1 = 10; and n2 = 20) in coordination with the automaton, by IFPM (h = 5), by ISVM (Gaussian kernel with variance = 10 and regularization constant = 5) and by IKNN (k = 5)


11.4. Conclusion and perspectives

The SS-DFKNN method presented in this chapter has been developed for the surveillance and monitoring of evolving systems. SS-DFKNN integrates two evolution indicators that allow changes in class characteristics to be detected, in order to correctly estimate the system's current functioning mode. The evolved classes allow us to follow more precisely how functioning modes change over time and to monitor the evolution of complex classes (defined by several subclasses). SS-DFKNN only requires a few patterns in order to be initialized, but the more representative the learning set is of the classes' characteristics, the more correctly evolutions are handled. The classes' characteristics are then refined sequentially as new patterns are classified.

SS-DFKNN has been applied to data from a fast neutron reactor's SG. The method is well suited to systems whose abnormal functioning modes cannot be known in advance: it follows the progressive evolution of each class and forgets classes that are no longer useful. Nevertheless, the method requires several parameters to be defined, such as th1, thFusion, NbMin, n1 and n2. For this reason, the development of a mechanism enabling the dynamic and adaptive identification of these parameters is an important direction for this research.

11.5. Bibliography

[AMA 06] AMADOU BOUBACAR H., Classification dynamique de données non stationnaire: Apprentissage séquentiel des classes évolutives, PhD thesis, Université des sciences et technologies de Lille, France, 2006.

[ANG 04] ANGELOV P.P., “A fuzzy controller with evolving structure”, Information Sciences, vol. 161, no. 1–2, pp. 21–35, 2004.

[ANG 10] ANGELOV P.P., FILEV D., KASABOV N., Evolving Intelligent Systems – Methodology and Applications, John Wiley and Sons, New York, 2010.

[ANG 00] ANGSTENBERGER L., Dynamic fuzzy pattern recognition, Dissertation, Fakultät für Wirtschaftswissenschaften der Rheinisch-Westfälischen Technischen Hochschule, Aachen, Germany, 2000.

[CAU 01] CAUWENBERGHS G., POGGIO T., “Incremental and decremental support vector machine learning”, Advances in Neural Information Processing Systems, MIT Press, Cambridge, MA, vol. 13, pp. 409–415, 2001.

[COH 05] COHEN L., AVRAHAMI G., LAST M., “Incremental Info-Fuzzy Algorithm for Real Time Data Mining of Non-Stationary Data Streams”, TDM Workshop, Brighton, UK, 2005.

[FRI 96] FRIGUI H., KRISHNAPURAM R., “A robust algorithm for automatic extraction of an unknown number of clusters from noisy data”, Pattern Recognition Letters, vol. 17, pp. 1223–1232, 1996.

[GIB 94] GIBB W.J., AUSLANDER D.M., GRIFFIN J.C., “Adaptive classification of myocardial electrogram waveforms”, IEEE Transactions on Biomedical Engineering, vol. 41, pp. 804–808, 1994.

[HAR 10] HARTERT L., SAYED-MOUCHAWEH M., BILLAUDEL P., A Semi-supervised Dynamic Version of Fuzzy K-Nearest Neighbours to Monitor Evolving Systems, Springer-Verlag, Berlin-Heidelberg, 2010.

[KAS 07] KASABOV N., Evolving Connectionist Systems: The Knowledge Engineering Approach, Second Edition, Springer-Verlag, London, 2007.

[KEL 85] KELLER J.M., GRAY M.R., GIVENS J.A., "A fuzzy K-nearest neighbor algorithm", IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-15, no. 4, pp. 580–585, 1985.

[LEC 03] LECOEUCHE S., LURETTE C., "Auto-adaptive and dynamical clustering neural network", ICANN 2003, Istanbul, Turkey, Proceedings, pp. 350–358, 2003.

[LUG 11a] LUGHOFER E., Evolving Fuzzy Systems – Methodologies, Advanced Concepts and Applications, Springer, Berlin-Heidelberg, 2011.

[LUG 11b] LUGHOFER E., ANGELOV P.P., “Handling drifts and shifts in on-line data streams with evolving fuzzy systems”, Applied Soft Computing, vol. 11, no. 2, pp. 2057–2068, 2011.

[NAK 97] NAKHAEIZADEH G., TAYLOR C., KUNISCH G., "Dynamic supervised learning. Some basic issues and application aspects", Classification and Knowledge Organization, Springer-Verlag, pp. 123–135, 1997.

[SAY 02] SAYED-MOUCHAWEH M., DEVILLEZ A., LECOLIER V.G., BILLAUDEL P., “Incremental learning in fuzzy pattern matching”, Fuzzy Sets and Systems, vol. 132, no. 1, pp. 49–62, 2002.

[ZHO 01] ZHOU S., Incremental document classification in a knowledge management environment, Thesis, University of Toronto, 2001.

 

 

1 Chapter written by Laurent HARTERT, Moamar SAYED-MOUCHAWEH and Danielle NUZILLARD.
