Mastering chaos: the Lorenz attractor model

To generate good simulated test data, we will use the so-called Lorenz attractor model.

Let's start with a little sample use case. Consider the task of detecting anomalies in vibration (accelerometer) sensor data measured on a bearing. To obtain such a data stream, we simply attach an accelerometer sensor to or near the bearing:

The Lorenz attractor is a physical model capable of generating a three-dimensional data stream, which we will use in place of a real accelerometer sensor stream. Incidentally, Edward Lorenz was one of the founders of chaos theory.
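The Lorenz system is defined by three coupled differential equations, dx/dt = σ(y − x), dy/dt = x(ρ − z) − y, and dz/dt = xy − βz. A minimal sketch of such a generator, using simple Euler integration and the classic parameter values (σ = 10, ρ = 28, β = 8/3), could look like this; the function name and step sizes are illustrative choices, not taken from the text:

```python
import numpy as np

def lorenz(n_steps=10_000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0,
           state=(1.0, 1.0, 1.0)):
    """Euler-integrate the Lorenz equations and return an
    (n_steps, 3) array of (x, y, z) samples."""
    x, y, z = state
    out = np.empty((n_steps, 3))
    for i in range(n_steps):
        dx = sigma * (y - x)          # dx/dt = sigma * (y - x)
        dy = x * (rho - z) - y        # dy/dt = x * (rho - z) - y
        dz = x * y - beta * z         # dz/dt = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[i] = (x, y, z)
    return out

data = lorenz()
print(data.shape)  # (10000, 3) -- three "vibration" dimensions over time
```

Each of the three columns stands in for one accelerometer axis, giving us an endless, deterministic, yet chaotic test stream.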

Now our task is to detect anomalies using a neural network, basically predicting when a bearing is about to break.

To simulate both conditions, we can switch the test data generator between two states: healthy and broken. The following phase plot shows the three vibration dimensions of the healthy state as a time series:

Changing the parameters of the physical model slightly puts it into a faulty state, shown in the same kind of phase plot:
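The state switch amounts to swapping one parameter set for another before integrating. The sketch below keeps the classic values for the healthy state; the "broken" values are purely illustrative assumptions, not the ones used in the text:

```python
import numpy as np

# Two parameter sets for the Lorenz equations. The "broken" values are
# an illustrative assumption -- any slight change alters the dynamics.
STATES = {
    "healthy": dict(sigma=10.0, rho=28.0, beta=8.0 / 3.0),
    "broken":  dict(sigma=16.0, rho=45.92, beta=4.0),
}

def generate(state="healthy", n_steps=5_000, dt=0.002):
    """Euler-integrate the Lorenz equations with the chosen parameter set."""
    p = STATES[state]
    x, y, z = 1.0, 1.0, 1.0
    out = np.empty((n_steps, 3))
    for i in range(n_steps):
        dx = p["sigma"] * (y - x)
        dy = x * (p["rho"] - z) - y
        dz = x * y - p["beta"] * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[i] = (x, y, z)
    return out

healthy = generate("healthy")
broken = generate("broken")
```

Flipping the `state` argument at runtime lets us stream healthy data first and then inject a fault on demand.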

For those not familiar with phase plots, let's use a run chart for the three time series, again in the healthy state:

This is in the faulty state:

One common technique is to transform this data from the time domain to the frequency domain using the discrete Fourier transform (DFT), typically computed via the fast Fourier transform (FFT), or wavelets.

The FFT of the healthy state is shown here:

And this is for the faulty state:

We can clearly observe that the faulty state carries more energy and contains additional frequencies. This alone would be sufficient to train a classifier, as we learned before, but we can do better. Let's construct a system that learns normal behavior from data and, once it sees data (or sequential patterns) it hasn't seen before, raises an alert.
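The frequency-domain comparison can be sketched with NumPy's FFT. The two signals below are toy stand-ins for one vibration axis (the frequencies and amplitudes are assumptions for illustration); in practice they would come from the Lorenz generator or the sensor itself:

```python
import numpy as np

rng = np.random.default_rng(42)
fs, n = 1000, 4096                      # sample rate (Hz) and window length -- assumptions
t = np.arange(n) / fs

# Toy signals: the "faulty" one has extra frequency components and more energy.
healthy = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(n)
faulty = (np.sin(2 * np.pi * 50 * t) + 0.8 * np.sin(2 * np.pi * 120 * t)
          + 0.5 * np.sin(2 * np.pi * 300 * t) + 0.3 * rng.standard_normal(n))

def spectrum(x):
    """Normalized magnitude spectrum of a real-valued signal."""
    return np.abs(np.fft.rfft(x)) / len(x)

energy_healthy = np.sum(spectrum(healthy) ** 2)
energy_faulty = np.sum(spectrum(faulty) ** 2)
print(energy_faulty > energy_healthy)   # the faulty state carries more spectral energy
```

Summing the squared spectrum gives a simple energy measure, which is exactly the property that separates the two states in the plots above.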

Such a system is an LSTM-based autoencoder, as shown in the following figure:

The autoencoder compresses the vast amount of data through a neural bottleneck, trying to reconstruct the very data it has seen. Because of the bottleneck, it necessarily discards vast amounts of irrelevant information and retains only the dominant patterns.

Such a neural network therefore learns how a system normally behaves. As soon as new patterns appear, it has a hard time reconstructing the data, and we can raise an alert.

The LSTM layers keep track of the internal state, basically making the neural network aware of the time domain by adding memory to the neurons. We will now implement this system end-to-end, from the sensor to the insight, so stay tuned!
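As a preview, the architecture from the figure can be sketched in Keras. The layer sizes, window length, and placeholder training data below are assumptions for illustration, not the values used later in the implementation:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

TIMESTEPS, DIM = 50, 3   # window length and number of sensor axes -- assumptions

# LSTM autoencoder: encode each window into a small latent vector (the
# bottleneck), then try to reconstruct the whole window from it.
model = Sequential([
    LSTM(16, input_shape=(TIMESTEPS, DIM)),   # encoder -> bottleneck
    RepeatVector(TIMESTEPS),                  # repeat latent vector per timestep
    LSTM(16, return_sequences=True),          # decoder
    TimeDistributed(Dense(DIM)),              # reconstruct all three axes
])
model.compile(optimizer="adam", loss="mse")

# Train on healthy windows only; the reconstruction error then acts as
# an anomaly score on new data.
windows = np.random.rand(32, TIMESTEPS, DIM).astype("float32")  # placeholder data
model.fit(windows, windows, epochs=1, verbose=0)
errors = np.mean((model.predict(windows, verbose=0) - windows) ** 2, axis=(1, 2))
```

Note that the model is trained to reproduce its own input, which is what makes it an autoencoder rather than a classifier.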
