Chapter 10

Detection/Classification of Argon and Water
Injections into Sodium in an SG of a Fast
Neutron Reactor 1


10.1. Context and aims

In the context of research aimed at developing fourth-generation nuclear power plants, the use of liquid sodium as a coolant is being investigated. This solution would require the development of specific monitoring tools. The reaction between (pressurized) water and sodium, an identified potential risk in steam generators (SGs), is a major issue in this case.

To improve our knowledge of the acoustic response created by the reaction between sodium and water, we have studied signals measured during a specific experiment carried out at the shutdown of the PFR fast neutron reactor in the UK in 1994. To simulate this chemical reaction, the AEA carried out tests consisting of injections of argon and water into the liquid sodium of one of the SGs. This experiment aimed at testing the ability of the acoustic monitoring devices to detect a reaction between sodium and water (in the case of a leak in one of the SG's tubes).

The study of these data, provided by the CEA [ORI 96, ORI 97a, ORI 97b], has a dual aim. On the one hand, it seeks to assess the sensitivity of the acoustic monitoring system to a water-sodium reaction. On the other hand, it compares the detectors' response to injections of water with the response obtained using argon, in order to determine whether, for the periodic checks of the monitoring system, sodium-water reactions (whose consequences are problematic for a new installation) can be simulated by injections of argon into the sodium (such injections produce no heat emission, so their effect on the equipment is not problematic). The approach based on this substitution would also allow us to test and characterize new monitoring systems designed for new generations of reactors.

The chapter first presents the acoustic monitoring signals. Second, the problem of detecting and isolating injections of water and argon is introduced and the results of the accompanying feasibility study are presented. The third section introduces the characterization of the injection signals and the classification results obtained. Finally, the concluding section summarizes the results of this study on detecting the reaction between sodium and water, as well as the main differences between injections of water and argon observed with this data set.

10.2. Data

The monitoring device is composed of two passive acoustic sensors. Each record corresponds to a test and contains the responses of the two sensors, sampled at 2 kHz, and the command signal for injecting water or argon. The signals were filtered before analysis to reduce the number of bursts that disrupt them (Figure 10.1). Each test can contain from one to seven injections. Overall, the database allows the isolation of 43 argon injections and 30 water injections. The injection durations and fluid flow rates vary considerably among trials (from 10 to 600 s and from 0.22 to 28.5 g/s, respectively).

Given the nature of the injection device [ORI 96, ORI 97a, ORI 97b], the delay between the injection command and the beginning of the fluid injection into the sodium can vary greatly (from a few seconds to 50 s). It is therefore very difficult to draw any conclusion on detection delays, because the injection starting time is known only with very poor precision.

Detecting injections requires us to characterize the corresponding background noise at the nominal operating stage. However, this noise can only be recorded during a fairly brief time lapse (a few seconds at most) preceding the first injection of each test. As a result, little is known about it. The noise recorded after an injection is not usable as a noise reference because it is disrupted by the dispersal of residual bubbles. The brevity of the background noise recording is one of the main difficulties of this study.

A time-dependent spectral study has been carried out to analyze the temporal evolution of the spectral content of the background noise (signal outside injections). Given the signal's slow evolution, this study was carried out using spectrograms. It shows that, locally, the signal's frequency content varies over time, but that on a larger scale it is independent of time (Figure 10.2). The signals are therefore assumed to be stationary over the segments of interest (background noise, injection of water, injection of argon, etc.).
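As an illustration, the following Python sketch shows how such a stationarity check can be carried out with a spectrogram. The file name and the deviation measure are assumptions made for illustration only; only the 2 kHz sampling frequency comes from the text.

```python
# Sketch: checking the quasi-stationarity of the background noise with a
# spectrogram. The file name and the deviation measure are assumptions.
import numpy as np
from scipy.signal import spectrogram

fs = 2000.0                        # 2 kHz sampling frequency (from the text)
x = np.load("pfr56_sensor1.npy")   # hypothetical pre-filtered background-noise segment

f, t, Sxx = spectrogram(x, fs=fs, nperseg=512, noverlap=256)

# Compare short-term spectra to the long-term average: a small relative
# deviation supports the stationarity hypothesis used in the chapter.
mean_psd = Sxx.mean(axis=1, keepdims=True)
rel_dev = np.abs(Sxx - mean_psd) / (mean_psd + 1e-12)
print("median relative deviation of short-term spectra:", np.median(rel_dev))
```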

Figure 10.1. Noise signals


10.3. Online (sequential) detection-isolation

In this section, we examine the issue of online, or sequential, detection-isolation. This is followed by a brief analysis of existing methods for detecting and isolating abrupt changes in random processes. Finally, a recursive detection-isolation algorithm for the injection of argon or water into the sodium of an SG is presented, together with some experimental results.

10.3.1. Formulating the practical problem

The problems requiring a solution are as follows:

– detection of argon or water injections into the sodium of a fast neutron reactor's SG;

– identification of the type of injection (argon or water);

with the shortest possible detection-isolation delay under constraints on false alarms and false isolations.

Figure 10.2. Spectrograms of the signal pfr56


We present a model based on the hypothesis that the measurements taken from each sensor (Yk)k≥1 are defined by the following autoregressive moving average (ARMA) equation of orders (p, q):

Y_k = \sum_{i=1}^{p} a_i Y_{k-i} + \sum_{i=0}^{q} b_i \xi_{k-i}, \qquad \xi_k \sim \mathcal{N}(0, \sigma^2) \ \text{i.i.d.}

A typical signal sample, pfr44, after preprocessing by decimation and pre-filtering, is shown in Figure 10.3. The point k0 of an abrupt change or "rupture" in the model is marked by a vertical line. The change in the parameter vector is therefore:

\theta_0 = (a_1, \dots, a_p, b_0, \dots, b_q, \sigma^2)_0 \;\longrightarrow\; \theta_l = (a_1, \dots, a_p, b_0, \dots, b_q, \sigma^2)_l \quad \text{for } k \ge k_0

where {ai, bi,σ2} are the parameters of the ARMA model under different hypotheses.
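As a minimal sketch of how such a background-noise model can be estimated in practice (assuming statsmodels is available; the model orders and file names are illustrative, and only the 40 s reference window of section 10.3.6 comes from the text), one may write:

```python
# Sketch: estimating an ARMA(p, q) background-noise model and computing the
# residuals used later by the detection functions. Orders and file names are
# assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

fs = 2000.0
p, q = 4, 2                               # illustrative ARMA orders
y = np.load("pfr44_sensor1.npy")          # hypothetical sensor record
y_ref = y[: int(40 * fs)]                 # reference segment: background noise only

model = ARIMA(y_ref, order=(p, 0, q), trend="n")   # ARMA(p, q) = ARIMA(p, 0, q)
fit = model.fit()
print(fit.params)                         # estimates of a_i, b_i and sigma^2

# Residuals of the full record under the background model: they should stay
# white with the background variance until an injection starts.
resid_full = fit.apply(y).resid
print("reference std:", fit.resid.std(), "full-record std:", resid_full.std())
```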

Figure 10.3. Sample pfr44: injection of water


10.3.2. Formulating the statistical problem

Statistically speaking, the problem of sequential change detection-isolation can be represented as follows. Let us assume that there is a finite family of distributions {Pi, i = 0,…, K − 1} (with K > 2 hypotheses), whose densities are {fi, i = 0,…, K − 1}, and that (Yk)k≥1 is a sequentially observed independent random sequence:

\mathcal{L}(Y_k) = \begin{cases} P_0 & \text{if } 1 \le k \le k_0 - 1 \\ P_l & \text{if } k \ge k_0 \end{cases}, \qquad 1 \le l \le K - 1

The change point k0 and the type of change (identified by its label l) are unknown. The problem consists of detecting and localizing (identifying) the ruptures observed in the sequence (Yk)k≥1. The abrupt change detection-isolation algorithm calculates a pair (N, ν) from the observations Y1, Y2,…, where N is the point of detection and isolation of the rupture and ν ∈ {1,…, K − 1} represents the final decision. The aim is to detect/identify a rupture with as short a delay and as few false alarms and false isolations as possible.

10.3.3. Non-recursive approach

10.3.3.1. Optimality criterion

Let Plk0 denote the distribution of the observations when a change of type l occurs at time k0, that is Y1,…,Yk0−1 ~ P0 and Yk0, Yk0+1,… ~ Pl. Following the criterion of [NIK 95a, NIK 95b], we want the worst mean detection delay:

[10.1] images

to be as low as possible in the following class:

[10.2] images

where N(1), N(2),…, N(m) is a series of false alarms, T is the minimal average time before a false alarm, and β1 is the maximum probability of false isolation (exclusion).

10.3.3.2. Non-recursive test

The pair (Nnr, νnr) (nr stands for non-recursive) is given by:

[10.3] images

Using the following formula, we define Nlnr :

[10.4] images

where hl,j are the detection-isolation thresholds.

10.3.3.3. Non-recursive test performance

The lower bound for the mean detection delay is defined by Theorem 10.1 (for further details, see [NIK 95a, NIK 95b]).

THEOREM 10.1.- Define the lower bound:

images

of the worst mean detection delay in the class K. Then:

images

when T → ∞ and β1 → 0 under the condition that Tβ1 is approximately a constant, with:

images

and

images

THEOREM 10.2.- The non-recursive algorithm [10.3]–[10.4] is asymptotically optimal in the class K.

The optimality criterion [10.1]–[10.2] is generalized to the case of dependent observations (Yk)k≥1 (e.g. ARMA models) in Lai's article [LAI 00].

The theory developed so far nevertheless has two disadvantages: its computational cost (i.e. the number of "elementary" likelihood ratios to be computed at each point) is significant, and the optimality criterion does not take into account the probability of false isolation for a given change point (k0 > 1). Simulations show that this probability strongly depends on the mutual "geometry" of the hypotheses [NIK 00].

Sequential Bayesian detection-isolation is examined in [MAL 99, LAI 00]. A multi-hypothesis approach based on a sequential Shiryayev test has been proposed by Malladi and Speyer [MAL 99], who adopt a dynamic programming approach.

Let Q = {q1,…, qK−1} be an a priori distribution over the hypotheses after the occurrence of the abrupt change. Supposing that this distribution is independent of k0 and fixing an a priori distribution π for the change point k0, Lai has proposed the following optimality criterion [LAI 00]:

[10.5] images

We can see here that Pr0(N ≤ k0 − 1) = Prlk0(N ≤ k0 − 1), because the event {N ≤ k0 − 1} only depends on the observations Y1,…, Yk0−1 and, therefore, the law Prlk0 gives the same distribution to Y1,…, Yk0−1 as the law P0. Lai [LAI 00] has established an asymptotic lower bound for the average "positive" detection-isolation delay of each type of rupture 1 ≤ l ≤ K − 1 (using the Bayesian approach with α → 0):

[10.6] images

Lai [LAI 00] has also introduced a second, non-Bayesian approach based on a sequential window-limited test of size m, i.e. based on Yt−m+1,…, Yt. For some safety-critical applications, we need to guarantee that the probability of false alarms and false isolations within a given time window (of length mα) is upper bounded by a given constant. The following lower bound has been established for the average "positive" delay for each type of abrupt change 1 ≤ l ≤ K − 1 (when α → 0):

[10.7] images

uniformly for k0 ≥ 1 under the following constraints:

images

and

images

for 1 ≤ lK − 1

10.3.4. Recursive approach

Let us now examine another, recursive, approach with a very low computational cost, which imposes constraints on the probability of false isolation when k0 > 1 [NIK 00].

10.3.4.1. Optimality criterion

The first modification involves a new definition of the mean detection delay. In contrast to [10.1], it is defined now as:

images

instead of the previous definition:

images

i.e. we can now estimate the mean detection delay using the following equation:

[10.8] images

The second modification is more important. Let us consider the following mode of observation: after a false alarm Nr(m), we restart the algorithm (Nr, νr) at the point n = Nr(m) + 1. Therefore, we estimate the minimum mean time before the false alarm and the probability of false isolation using the following equations:

[10.9] images

where image

10.3.4.2. Recursive test

This recursive algorithm is given by the pair (Nr, νr), with:

[10.10] images

The stopping time Nlr is defined by the recursive formulas:

[10.11] images

where g0,0 (n) ≡ 0, x+ = max(0, x) and

[10.12] images

where hd is the detection threshold and hl is the isolation threshold.

10.3.5. Practical algorithm

The aim is to identify the type of anomaly (the injection of water or argon) as soon as possible with few false alarms and false isolations.

After preprocessing, a recursive detection-isolation test [NIK 00] is carried out at each time step for each hypothesis and each sensor. On the basis of the ARMA residuals ek^b, ek^e and ek^a, we calculate likelihood ratios to test the two alternative hypotheses He (water) and Ha (argon) against the base hypothesis Hb (background noise):

[10.13] images

[10.14] images

the detection functions:

[10.15] images

and isolation (classification) functions:

[10.16] images

The decision rule for the point k is the following:

– The hypothesis He is accepted if the following conditions are satisfied:

Gk,e ≥ hd and Lk,e ≥ hl

– The hypothesis Ha is accepted if the following conditions are satisfied:

Gk,a ≥ hd and Lk,a ≥ hl

where the constants hd > 0 and hl > 0 are fixed beforehand.
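The exact expressions of the likelihood ratios and of the functions G and L are given by [10.13]–[10.16], which are not reproduced here; the following sketch therefore only illustrates one plausible CUSUM-type reading of the decision rule, assuming zero-mean Gaussian ARMA residuals. The function names and variance parameters are assumptions.

```python
# Sketch of the recursive detection-isolation rule of section 10.3.5, assuming
# zero-mean Gaussian ARMA residuals. This is a plausible CUSUM-type reading of
# equations [10.13]-[10.16], not the exact statistics of the original study.
import numpy as np

def loglik_gauss(e, sigma2):
    """Pointwise log-likelihood of a zero-mean Gaussian residual."""
    return -0.5 * (np.log(2.0 * np.pi * sigma2) + e**2 / sigma2)

def detect_isolate(e_b, e_e, e_a, s2_b, s2_e, s2_a, hd, hl):
    """e_b, e_e, e_a: residuals of one sensor under the background (b),
    water (e) and argon (a) ARMA models; s2_*: their innovation variances.
    Returns (alarm index, label) or (None, None)."""
    G_e = G_a = L_e = L_a = 0.0
    for k in range(len(e_b)):
        z_eb = loglik_gauss(e_e[k], s2_e) - loglik_gauss(e_b[k], s2_b)
        z_ab = loglik_gauss(e_a[k], s2_a) - loglik_gauss(e_b[k], s2_b)
        # detection functions: cumulative evidence against the background model
        G_e = max(0.0, G_e + z_eb)
        G_a = max(0.0, G_a + z_ab)
        # isolation functions: cumulative evidence of one alternative over the other
        L_e = max(0.0, L_e + (z_eb - z_ab))
        L_a = max(0.0, L_a + (z_ab - z_eb))
        if G_e >= hd and L_e >= hl:
            return k, "water"
        if G_a >= hd and L_a >= hl:
            return k, "argon"
    return None, None
```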

10.3.6. Experimental results

The sequential detection-isolation procedure defined in the previous section has been applied to a number of samples from the SG data. The parameters {ai, bi, σ2}b of the ARMA noise model have been estimated using the first 40 s of each sample. The first tests provide encouraging results. The behavior of the detection-isolation functions Gk,e, Gk,a, Lk,e and Lk,a is shown in Figure 10.4 in the case of an argon injection with k0 = 60 s.

Figure 10.4. Sample pfr40 arg: argon injections. The detection-isolation functions of the sequential test


10.4. Offline classification (non-sequential)

10.4.1. Characterization and approach used

In the first part of the study, the detection capacity is not in doubt. However, the identification results show that the system's response to water injections is not identical to its response to argon injections. A wide range of options is available to analyze these differences. Given that the signal stationarity hypothesis is fairly well confirmed, the results of an initial study [ORI 97b] have led us to analyze the spectral content of the injection signals.

10.4.2. Initial characterization

Figures 10.5 and 10.6 show the mean normalized amplitude spectra of the argon and water injection signals for the two sensors. The mean normalized amplitude spectrum SM((k/N)Fe) has been calculated as follows:

images

with

images

and

images

where NB is the number of injections, Fe the sampling frequency, Te = 1/Fe, sp(t) the p-th injection signal, Mp the number of samples of the signal sp, and j2 = −1. The difference between the mean spectra is small. Comparison of the spectra from the two sensors shows a large amount of similarity and an almost identical structure (same modes). The SG seems to behave like a musical instrument excited by a variety of sounds. The resonance modes and their harmonics are therefore excited in a similar way by injections of water and argon. The small differences observed can be explained by the physical properties of the water-sodium and argon-sodium reactions, which are not completely identical.
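A minimal sketch of this computation is given below. Since the exact normalization used for the mean spectra is not reproduced above, each amplitude spectrum is simply normalized by its total amplitude here, which is an assumption; the segment lists are also assumptions about how the transients are stored.

```python
# Sketch: mean normalized amplitude spectrum of one class of injections for one
# sensor, as in Figures 10.5 and 10.6. The per-spectrum normalization (here by
# the total amplitude) is an assumption.
import numpy as np

def mean_normalized_spectrum(segments, n_fft=2048):
    """segments: list of 1-D arrays, one pre-filtered transient per injection."""
    spectra = []
    for s in segments:
        S = np.abs(np.fft.rfft(s, n=n_fft))   # amplitude spectrum S_p
        spectra.append(S / S.sum())           # assumed normalization
    return np.mean(spectra, axis=0)           # average over the NB injections

fs = 2000.0
freqs = np.fft.rfftfreq(2048, d=1.0 / fs)     # frequency axis k*Fe/N
# S_water = mean_normalized_spectrum(water_segments)   # hypothetical lists of
# S_argon = mean_normalized_spectrum(argon_segments)   # injection transients
```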

Further examination of the spectra shows that, on average, the spectral amplitudes are greater at low frequencies for injections of water, while high-frequency components (from approximately 400 Hz) are more significant for injections of argon. Furthermore, we can see that the disparity between individual spectra of the same class is significant and larger than the difference between water and argon (Figure 10.7). This observation seems to contradict the conclusion of the first part of the study and shows that the choice of characterization is crucial for discriminating between the two types of injection.

To capture the previously observed differences, for each sensor we have decomposed the transients corresponding to the injections into 25 frequency ranges centered on the modes and calculated the energy in each of these ranges.

Figure 10.5. Mean normalized amplitude spectrum of argon and water injections


Figure 10.6. Mean normalized spectrum of argon and water injections


Figure 10.7. Mean normalized amplitude spectrum for the two different argon injection signals


To avoid depending on the signal's amplitude, which can vary significantly with the position and size of the leak, the spectral amplitude of each mode is calculated after normalizing the amplitude spectrum by its content in the frequency range 200–1,024 Hz. The restriction of the normalization to this range was decided after reading the previous CEA reports, which noted the appearance of components between 50 and 150 Hz that are likely related to the alternating current frequency. By restricting the normalization range, these components are excluded and do not affect the result.
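A sketch of this feature extraction is given below. The 25 band edges themselves are not listed in the text (only nine of them appear in section 10.4.3), so the band list is left as an input; only the normalization range follows the description above.

```python
# Sketch: band amplitudes normalized by the 200-1,024 Hz content, as described
# above. The list of 25 mode-centered bands is an input and is an assumption.
import numpy as np

def band_features(segment, bands, fs=2000.0, n_fft=2048,
                  norm_range=(200.0, 1024.0)):
    """bands: list of (f_low, f_high) tuples in Hz; returns one value per band."""
    S = np.abs(np.fft.rfft(segment, n=n_fft))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    norm_mask = (freqs >= norm_range[0]) & (freqs <= norm_range[1])
    S_norm = S / S[norm_mask].sum()        # excludes the 50-150 Hz mains components
    feats = []
    for f_lo, f_hi in bands:
        in_band = (freqs >= f_lo) & (freqs <= f_hi)
        feats.append(S_norm[in_band].sum())   # amplitude content of the band
    return np.array(feats)
```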

10.4.3. Effective features

To identify the effective features more precisely, an analysis of the feature distributions was carried out in two stages. Since the two sensors are similar, we chose to keep the same features for both sensors.

During the first stage, features were selected by observing the marginal distributions of their values over the 73 samples. This suboptimal step was chosen because of the small number of examples in the database. It allowed us to identify nine features per sensor showing different marginal distributions for the two classes to be characterized (Figure 10.8).

Figure 10.8. Normalized amplitude in the 25 identified frequency ranges – 30 injections of water followed by 43 injections of argon – sensor 2


The features correspond to the following frequency ranges (range number and limits in Hz): 5 (45–64), 6 (65–89), 7 (90–119), 10 (160–184), 11 (185–224), 12 (225–256), 18 (430–489), 19 (490–549) and 21 (600–679). These are the ranges for which a rupture can be seen in the figure between the 30 initial values (water) and the following 43 values (argon).
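Using the band_features sketch above, the 18-dimensional representation can be assembled as follows; the organization of the database into per-injection sensor pairs is an assumption.

```python
# Sketch: assembling the 18 spectral features (nine selected bands per sensor)
# for the 73 injections. The nine ranges come from the list above; the data
# layout (one (sensor1, sensor2) pair of transients per injection) is assumed.
import numpy as np

selected_bands = [(45, 64), (65, 89), (90, 119), (160, 184), (185, 224),
                  (225, 256), (430, 489), (490, 549), (600, 679)]

def injection_features(sensor1_seg, sensor2_seg):
    f1 = band_features(sensor1_seg, selected_bands)
    f2 = band_features(sensor2_seg, selected_bands)
    return np.concatenate([f1, f2])        # 9 + 9 = 18 features

# X = np.array([injection_features(s1, s2) for s1, s2 in injection_pairs])
# y = np.array([0] * 30 + [1] * 43)        # 0 = water, 1 = argon
```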

The resulting representation space therefore has 18 spectral features (9 × 2 sensors) for 73 examples. No other feature has been added. We will see in the following section that this characterization allows us to obtain convincing classification results.

10.4.4. Classification

The aim is to identify the most effective characteristics among the nine selected features.

Given the relatively limited number of features (nine per sensor) and the fact that we have chosen to keep the same features for each sensor, it was possible to apply an exhaustive search method. The method consists of forming all the possible combinations of features (2^9 − 1 = 511 combinations), building a decision rule for each, evaluating its performance, and identifying the best groups of features from these results, as sketched below.
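The following sketch of this exhaustive search uses a leave-one-out error estimate (section 10.4.5) and an SVM classifier (section 10.4.4); scikit-learn is assumed, and the hyperparameter values are placeholders.

```python
# Sketch: exhaustive search over the 2^9 - 1 = 511 non-empty band subsets, the
# same subset being used for both sensors. Classifier and leave-one-out error
# follow sections 10.4.4-10.4.5; hyperparameters are placeholders.
from itertools import combinations
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

def loo_error(X, y, C=1.0, gamma="scale"):
    clf = SVC(C=C, kernel="rbf", gamma=gamma)
    return 1.0 - cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

def exhaustive_search(X, y, n_bands=9):
    """X: (73, 18) matrix, columns 0-8 = sensor 1, columns 9-17 = sensor 2."""
    results = []
    for size in range(1, n_bands + 1):
        for subset in combinations(range(n_bands), size):
            cols = list(subset) + [j + n_bands for j in subset]  # same bands, both sensors
            results.append((loo_error(X[:, cols], y), subset))
    results.sort(key=lambda r: r[0])
    return results            # smallest estimated error first
```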

Learning the decision rule was achieved using a two-class support vector machine (SVM) [CRI 00, BOS 92, COR 95]. This type of method consists of identifying a hyperplane separating the two classes. We therefore seek a decision rule of the form d(x) = sign(g(x)) with g(x) = 〈x, w〉 + b, where w is a vector normal to the separating hyperplane, b is a threshold and x is an observation in ℝ^d. The risk r to be minimized is given by:

r(w, \xi) = \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i

where ξi is a positive real (slack) variable whose value is related to the misclassification risk of xi, an element of the learning set Dn.

The parameter C controls the trade-off between the training error and the model complexity: a small value of C will increase the number of training errors, while a large C will lead to a more complex solution.

The optimization problem to determine the function g(x) can be written in the form:

\min_{w,\,b,\,\xi}\ \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i \quad \text{subject to} \quad y_i\big(\langle x_i, w\rangle + b\big) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1, \dots, n

where yi = −1 if xi ∈ ω0 and yi = +1 if xi ∈ ω1.

The solution to this problem is obtained by solving the dual problem, which can be expressed as the following quadratic form:

[10.17] \max_{\alpha}\ \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{k=1}^{n} \alpha_i \alpha_k\, y_i y_k\, \langle x_i, x_k\rangle

under the constraints 0 \le \alpha_i \le C,\ i = 1, \dots, n, and \sum_{i=1}^{n} \alpha_i y_i = 0.

The analytical form of the decision function g is given by:

g(x) = \sum_{i=1}^{n} \alpha_i\, y_i\, \langle x_i, x\rangle + b

These results can be generalized to nonlinear classification problems. The idea is to apply a linear discrimination method such as the one described previously in a transformed space ℝ^q of dimension q. This transformed space is obtained by mapping the data from the initial space using a function φh: ℝ^d → ℝ^q with q > d, parameterized by h. Since the optimization problem and the decision function are expressed only in terms of the Lagrange multipliers α and the scalar products 〈xk, xl〉, the problem expressed in the transformed space only requires the definition of a scalar product. Any function Kh satisfying Mercer's theorem can be used to calculate the scalar product in the transformed space without expressing the transformation φh explicitly. The decision function in this case is written in the form:

g(x) = \sum_{i=1}^{n} \alpha_i\, y_i\, K_h(x_i, x) + b

where b is the bias and xi ∈ ℝ^d.

The kernel functions Kh generally depend on a parameter h that has a significant influence on the class of detectors obtained for a given set of values of the parameter C. It is thus necessary to determine both C and h; in practice, the cross-validation method is commonly employed, as sketched below.
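As a sketch (assuming scikit-learn, and assuming the kernel form given in section 10.4.6.1 so that gamma = 1/(2h²)), the joint choice of (C, h) by cross-validation can be written as follows; the grids are illustrative.

```python
# Sketch: choosing (C, h) for the Gaussian-kernel SVM by cross-validation.
# scikit-learn parameterizes the kernel by gamma; the mapping gamma = 1/(2 h^2)
# assumes the kernel form exp(-||x - x'||^2 / (2 h^2)). Grids are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, LeaveOneOut

h_grid = np.logspace(-1, 1, 9)
param_grid = {
    "C": np.logspace(-1, 3, 9),
    "gamma": 1.0 / (2.0 * h_grid**2),
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=LeaveOneOut())
# search.fit(X, y)                                  # X: features, y: labels
# print(search.best_params_, 1.0 - search.best_score_)
```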

10.4.5. Performance evaluation

The performance of a learned rule is evaluated by estimating the probability of error. An empirical estimator is often used (the ratio of the number of errors to the number of examples tested). To avoid a learning bias, the test data set is generally chosen independent of the learning data set. The most common performance evaluation method is V-fold cross-validation [BRE 96]. It consists of randomly dividing the learning set Dn into V mutually exclusive subsets of approximately identical sizes: Dn = D1 ∪ D2 ∪ … ∪ DV. Typically, V = 5 or 10. At stage k, the decision rule is learned on the reduced learning set Dn \ Dk, and the performance criterion (the probability of error in our case) is estimated using Dk. The global criterion is estimated by empirically averaging the estimates obtained for each set Dk. This cross-validation method reduces the estimation bias in comparison with the simple validation method. A particular case of V-fold cross-validation is the leave-one-out procedure, for which V = n.
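Written out explicitly, the V-fold estimator described above looks as follows; make_rule stands for any learning procedure returning a classifier with a predict method and is a placeholder, not a library function.

```python
# Sketch: the V-fold cross-validation error estimator described above.
# `make_rule(X_train, y_train)` is a placeholder learning procedure returning
# an object with a predict() method.
import numpy as np

def v_fold_error(X, y, make_rule, V=5, seed=0):
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, V)                 # V mutually exclusive subsets
    errors = []
    for k in range(V):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(V) if j != k])
        rule = make_rule(X[train], y[train])       # learn on D_n \ D_k
        errors.append(np.mean(rule.predict(X[test]) != y[test]))   # estimate on D_k
    return float(np.mean(errors))                  # empirical average over folds
```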

10.4.6. Experimental results

10.4.6.1. Kernels

The kernel functions Kh selected for the SVMs are Gaussian functions and have the form:

K_h(x, x') = \exp\left(-\frac{\|x - x'\|^2}{2h^2}\right)

The decision function depends on two parameters (C, h) that are optimized on a set of possible values. Given the small number of examples, the performance has been evaluated by the leave-one-out procedure [DEV 82].

10.4.6.2. Results

The best result is a classification error estimate equal to zero, that is, the estimated probability of error P̂e = 0.

Several subspaces lead to this result. Among them, the spaces with the smallest dimension have three features per sensor (six in total). Table 10.1 shows the different frequency ranges for which this result has been obtained.

Table 10.1. Subsets of features leading to an estimated error of zero

Space Frequency ranges used
E1 (185, 224) – (490, 549) – (600, 680)
E2 (90, 119) – (185, 224) – (600, 680)
E3 (90, 119) – (160, 184) – (430, 489)
E4 (65, 89) – (185, 224) – (600, 680)
E5 (45, 64) – (185, 224) – (600, 680)

These results confirm that, despite the close similarity between the spectra, the injections of argon and water can be clearly differentiated. Notably, they show that the joint distributions of the features in the five subspaces differ significantly from one class to the other.

A study based on principal component analysis (PCA) has confirmed the discriminating nature of these subspaces (Figure 10.9).
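A minimal sketch of such a check (assuming scikit-learn; the selected columns and labels are placeholders) is:

```python
# Sketch: projecting the 73 injections of one selected subspace onto its first
# two principal axes, as in Figure 10.9. Column indices and labels are
# placeholders for one of the subspaces E1-E5 of one sensor.
import numpy as np
from sklearn.decomposition import PCA

def project_2d(X_sub):
    """X_sub: (73, 3) matrix of the three band features of one subspace."""
    return PCA(n_components=2).fit_transform(X_sub)   # coordinates on the two inertia axes

# proj = project_2d(X[:, cols_E1])
# Plotting proj colored by class (30 water, then 43 argon) reproduces the kind
# of separation shown in Figure 10.9.
```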

We assume that the detection methods have fulfilled their role, and we therefore seek to differentiate between injections of water and argon. This problem can be formalized as a binary hypothesis test of He against Ha. It therefore consists of building a decision rule that carries out this test, learned from the available data.

Figure 10.9. Principal component analysis in the five selected representation spaces – projection onto the first two inertia axes – sensor 2


10.5. Results and comments

Several examples of detection-isolation results have been shown, which clearly demonstrate the capacity of the tested device to correctly detect nonstationarities. Classification performance has been given for several representation spaces, and the sensitivity to the learning parameters has been discussed, as well as the possible interpretation of the features that allow us to distinguish between the classes "argon" and "water".

10.6. Conclusion

The research presented shows that it is possible to detect a reaction in sodium using acoustic sensors. The detection delay guaranteeing a high detection rate has not been evaluated, given the scarcity of data and the uncertainty on the injection times (the command-injection delay is highly variable). The results also show that the acoustic emissions of water and argon injections differ significantly at some frequencies. Consequently, argon is not a perfect substitute for water when calibrating a water-sodium reaction detection system; these differences must be better understood and their impact limited. However, the results also show that the passive acoustic detection device is highly sensitive and that it carries important information on the reactor's operation.

10.7. Bibliography

[BEL 61] BELLMAN R., Adaptive Control Processes: A Guided Tour, Princeton University Press, Princeton, 1961.

[BLU 97] BLUM A.L., LANGLEY P., “Selection of relevant features and examples in machine learning”, Artificial Intelligence, vol. 97, nos. 1–2, pp. 245–271, 1997.

[BOS 92] BOSER B., GUYON I., VAPNIK V., “A training algorithm for optimal margin classifiers”, Proceedings of the Fifth Annual Workshop on Computational Learning Theory, Pittsburgh, PA, pp. 144–152, 1992.

[BRE 96] BREIMAN L., “Bagging predictors”, Machine Learning, vol. 24, no. 2, pp. 123–140, 1996.

[COR 95] CORTES C., VAPNIK V., “Support-vector networks”, Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.

[CRI 00] CRISTIANINI N., SHAWE-TAYLOR J., An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, Cambridge, 2000.

[DEV 82] DEVIJVER P.A., KITTLER J., Pattern Recognition: A Statistical Approach, Prentice-Hall, London, 1982.

[DUB 90] DUBUISSON B., Diagnostic et reconnaissance des formes, Hermès, Paris, 1990.

[LAI 98] LAI T.L., “Information bounds and quick detection of parameter changes in stochastic systems”, IEEE Transactions on Information Theory, vol. 44, no. 7, pp. 2917–2929, 1998.

[LAI 00] LAI T.L., “Sequential multiple hypothesis testing and efficient fault detection-isolation in stochastic systems”, IEEE Transactions on Information Theory, vol. 46, no. 2, pp. 595–608, 2000.

[MAL 99] MALLADI D.P., SPEYER J.L., “A generalized Shiryayev sequential probability ratio test for change detection and isolation”, IEEE Transactions on Automatic Control, vol. 44, no. 8, pp. 1522–1534, 1999.

[NIK 95a] NIKIFOROV I., “A generalized change detection problem”, IEEE Transactions on Information Theory, vol. 41, no. 1, pp. 171–187, January 1995.

[NIK 95b] NIKIFOROV I., “On two new criteria of optimality for the problem of sequential change diagnosis”, Proceedings of the American Control Conference, Seattle, WA, pp. 97–101, 1995.

[NIK 00] NIKIFOROV I., “A simple recursive algorithm for diagnosis of abrupt changes in random signals”, IEEE Transactions on Information Theory, vol. 46, no. 7, pp. 2740–2746, November 2000.

[ORI 96] ORIOL P.G.L., Analyse des enregistrements acoustiques des essais d'injections PFR en vue de la caractérisation de la réaction sodium-eau, CEA, Technical Report, June 1996.

[ORI 97a] ORIOL L., Analyse basse fréquence des essais PFR, CEA, Technical Report, December 1997.

[ORI 97b] ORIOL S.E.L., DEMARAIS R., Base de données numérique des enregistrements acoustiques des injections d'argon et d'eau faites dans un GV de PFR, CEA, Technical Report, December 1997.

[VAP 95] VAPNIK V., The Nature of Statistical Learning Theory, Springer-Verlag, New York, 1995.

[VAP 98] VAPNIK V.N., Statistical Learning Theory, John Wiley & Sons, New York, 1998.

 

 

1 Chapter written by Pierre BEAUSEROY, Edith GRALL-MAËS and Igor NIKIFOROV.
