© Michael Paluszek, Stephanie Thomas  2017

Michael Paluszek and Stephanie Thomas, MATLAB Machine Learning, 10.1007/978-1-4842-2250-8_6

6. Machine Learning Examples in MATLAB

Michael Paluszek and Stephanie Thomas, New Jersey, USA

6.1 Introduction

The remainder of the book provides machine learning examples in MATLAB that span the technologies discussed. Each example is a useful application in its own right and comes with full source code, the theory behind the code, and references for further study. Each example is self-contained and addresses one of the autonomous learning technologies discussed earlier in the book, so you can jump around and try the examples that interest you the most.

As we explained earlier, autonomous learning is a huge field. There are many benefits from knowing all aspects of the field. Those with experience in any one of the applications may find the examples to be straightforward. Topics outside your area of expertise will be more challenging. Much like cross-training in the gym, working in other areas will help you in your own area of expertise.

6.2 Machine Learning

We present three types of machine learning algorithms. In each case we present a simple algorithm to achieve the desired results.

6.2.1 Neural Networks

This example will use a neural network to classify digits. We will start with a set of six digits and create a training set by adding noise to the digital images. We will then see how well our learning network performs at identifying a single digit, and then add more nodes and outputs to identify multiple digits with one network. Classifying digits is one of the oldest uses of machine learning: the U.S. Postal Service introduced ZIP code reading years before machine learning started hitting the front pages of the newspapers! Earlier digit readers required block letters written in well-defined spots on a form. Reading digits off any envelope is an example of learning in an unstructured environment.

6.2.2 Face Recognition

Face recognition is available in almost every photo application. Many social media sites, such as Facebook and Google Plus, also use face recognition. Cameras have built-in face recognition, though not identification, to help with focusing when taking portraits. Our goal is to get the algorithm to match faces, not classify them. Data classification is covered in the next chapter.

There are many algorithms for face identification, and commercial software can use multiple algorithms. In this application, we pick a single algorithm and use it to identify one face in a set of photographs—of cats.

Face recognition is a subset of general image recognition. The chapter on neural networks, Chapter 9, gives another example. Our example of face recognition works within a structured environment: the pictures are all taken from the front, and each picture shows only the head. This makes the problem much easier to solve.

6.2.3 Data Classification

This example uses a decision tree to classify data. Classifying data is one of the most widely used areas of machine learning. In this example, we assume that two features are sufficient to classify a sample and determine to which group it belongs. We have a training set of known samples, each a member of one of three groups. We then use a decision tree to classify new data. We’ll introduce a graphical display to make the process easier to understand.
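To make the idea concrete, here is a minimal sketch of classifying two-feature samples into three groups with a decision tree. The chapter's own implementation is in MATLAB; this Python/NumPy sketch uses entirely hypothetical data (the class centers, spread, and split thresholds below are illustrative choices, not values from the chapter), and the tree is hand-built rather than learned, to show the kind of axis-aligned splits a fitted tree produces.

```python
import numpy as np

# Hypothetical training set: two features per sample, three groups
# clustered around different centers (all values are illustrative).
rng = np.random.default_rng(42)
class_means = {0: (1.0, 1.0), 1: (4.0, 1.0), 2: (2.5, 4.0)}
features, labels = [], []
for label, center in class_means.items():
    features.append(rng.normal(center, 0.4, size=(30, 2)))
    labels.extend([label] * 30)
features = np.vstack(features)
labels = np.array(labels)

def classify(point):
    """A hand-built two-level decision tree with axis-aligned splits."""
    f1, f2 = point
    if f2 > 2.5:       # first split: feature 2 separates group 2
        return 2
    elif f1 > 2.5:     # second split: feature 1 separates groups 0 and 1
        return 1
    return 0

accuracy = np.mean([classify(p) == t for p, t in zip(features, labels)])
```

Because the groups are well separated relative to the split thresholds, this simple tree classifies nearly all of the training samples correctly; the chapter's graphical display serves exactly this purpose of checking where the splits fall relative to the data.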

With any learning algorithm it is important to know why the algorithm made its decision. Graphics can help you explore large data sets when columns of numbers aren’t terribly helpful.

6.3 Control

Feedback control algorithms inherently learn about the environment through the measurements used for control. These chapters show how control algorithms can be extended to effectively design themselves using measurements. The measurements may be the same as those used for control, but the adaptation, or learning, happens more slowly than the control response time. An important aspect of control design is stability. A stable controller will produce bounded outputs for bounded inputs. It will also produce smooth, predictable behavior of the system that is controlled. An unstable controller will typically experience growing oscillations in the quantities (such as speed or position) that are controlled. In these chapters we explore both the performance of learning control and the stability of such controllers.

6.3.1 Kalman Filters

The Kalman filters chapter, Chapter 10, shows how Kalman filters allow you to learn about dynamical systems for which we already have a model. It provides an example of a variable-gain Kalman filter for a spring system, that is, a mass connected to its base via a spring and a damper. This is a linear system, which we write in discrete time, and it provides an introduction to Kalman filtering. We show how Kalman filters can be derived from Bayesian statistics, which ties them to many machine learning algorithms. The Kalman filter, developed by R. E. Kalman, C. Bucy, and R. Battin, was not originally derived in this fashion.
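For reference, the spring system described here can be written in a standard state-space form. The symbols below (mass $m$, damping $c$, spring constant $k$, control input $u$) are generic notation choices, not values from the chapter:

```latex
m\ddot{x} + c\dot{x} + kx = u
\qquad\Longrightarrow\qquad
\frac{d}{dt}\begin{bmatrix} x \\ \dot{x} \end{bmatrix}
= \begin{bmatrix} 0 & 1 \\ -k/m & -c/m \end{bmatrix}
\begin{bmatrix} x \\ \dot{x} \end{bmatrix}
+ \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u
```

In discrete time with sample period $T$ this becomes $\mathbf{x}_{k+1} = A_d\,\mathbf{x}_k + B_d\,u_k$, with $A_d = e^{AT}$, which is the form a discrete-time Kalman filter propagates.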

The second section adds a nonlinear measurement. A linear measurement is one proportional to the state (in this case, position) that it measures. Our nonlinear measurement will be the angle of a tracking device that points at the mass from a position offset from the line of motion. One way to handle such a measurement is to use an unscented Kalman filter (UKF) for state estimation; the UKF lets us use a nonlinear measurement model easily.
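A sketch of the geometry shows why this measurement is nonlinear. With the tracking device offset a distance $d$ from the line of motion ($d$ is a generic symbol, not a value from the chapter), the measured angle is

```latex
\theta = \tan^{-1}\!\left(\frac{x}{d}\right)
```

which is not proportional to the position $x$, so a purely linear Kalman filter cannot use it directly.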

The last part of the chapter describes the UKF configured for parameter estimation. This system learns the model, albeit one for which we already have a mathematical form; as such, it is an example of model-based learning. In this example the filter estimates the oscillation frequency of the spring-mass system, and it demonstrates how the system must be stimulated for the parameters to be identifiable.

6.3.2 Adaptive Control

Adaptive control is a branch of control systems in which the gains of the control system change based on measurements of the system. A gain is a number that multiplies a measurement from a sensor to produce a control action such as driving a motor or other actuator. In a nonlearning control system, the gains are computed prior to operation and remain fixed. This works very well most of the time since we can usually pick gains so that the control system is tolerant of parameter changes in the system. Our gain “margins” tell us how tolerant we are to uncertainties in the system. If we are tolerant to big changes in parameters, we say that our system is robust.
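As a minimal illustration, a proportional feedback law multiplies the measurement $y$ by a fixed gain $K$ to produce the control action:

```latex
u = -K\,y
```

The gain margin is then the factor by which $K$ could be increased before the closed loop goes unstable; a large margin means the system is robust to parameter changes.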

Adaptive control systems change the gain based on measurements during operation. This can help a control system perform even better. The better we know a system’s model, the tighter we can control the system. This is much like driving a new car. At first you have to be cautious driving a new car because you don’t know how sensitive the steering is to turning the wheel or how fast it accelerates when you depress the gas pedal. As you learn about the car you can maneuver it with more confidence. If you didn’t learn about the car, you would need to drive every car in the same fashion.

This chapter starts with a simple example of adding damping to a spring using a control system. Our goal is to get a specific damping time constant. For this we need to know the spring constant. Our learning system uses a fast Fourier transform to measure the spring constant. We’ll compare it to a system that does know the spring constant. This is an example of tuning a control system.
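The frequency-domain identification step can be sketched as follows. This is a Python/NumPy sketch of the idea only (the chapter's own code is MATLAB), and the mass, spring constant, and sample times below are hypothetical values chosen for illustration: take the FFT of the measured position, find the peak frequency, and recover the spring constant from the natural frequency relation $\omega_n = \sqrt{k/m}$.

```python
import numpy as np

# Hypothetical undamped mass-spring system: m*x'' + k*x = 0.
m, k_true = 1.0, 4.0               # true parameters ("unknown" to the learner)
omega_n = np.sqrt(k_true / m)      # natural frequency, rad/s

dt, T = 0.01, 100.0                # sample period and record length, s
t = np.arange(0.0, T, dt)
x = np.cos(omega_n * t)            # measured position (analytic solution)

# FFT of the position record; the spectral peak gives the oscillation frequency.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), dt)        # frequency axis in Hz
f_peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

# Knowing the mass, recover the spring constant from omega_n = sqrt(k/m).
k_est = m * (2.0 * np.pi * f_peak) ** 2
```

The estimate lands within a few percent of the true spring constant here; its accuracy is limited by the frequency resolution of the FFT, which is the reciprocal of the record length.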

The second example is model reference adaptive control of a first-order system. This system automatically adapts so that the system behaves like the desired model. This is a very powerful method and applicable to many situations.
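One standard formulation of model reference adaptive control for a first-order system, given here for orientation, is the Lyapunov-rule version with adaptation gain $\gamma$ (the symbols are generic textbook notation, not taken from the chapter, and the adaptation laws below assume $b > 0$):

```latex
\dot{y} = -a\,y + b\,u \quad\text{(plant)}, \qquad
\dot{y}_m = -a_m\,y_m + b_m\,r \quad\text{(reference model)}
```

```latex
u = \theta_1\,r - \theta_2\,y, \qquad e = y - y_m, \qquad
\dot{\theta}_1 = -\gamma\,e\,r, \qquad
\dot{\theta}_2 = \gamma\,e\,y
```

The adaptation drives the tracking error $e$ toward zero, so the closed-loop plant comes to behave like the desired reference model regardless of the plant parameters $a$ and $b$.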

The third example is longitudinal control of an aircraft, in which we control the pitch angle using the elevators. We have five nonlinear equations covering the pitch rotational dynamics, velocity in the x-direction, velocity in the z-direction, and change in altitude. The system adapts to changes in velocity and altitude, both of which change the drag and lift forces and the moments on the aircraft, and also change the response to the elevators. We use a neural net as the learning element of our control system. This is a practical problem applicable to all types of aircraft, from drones to high-performance commercial aircraft.

Our last example will be ship steering control. Ships use adaptive control because it is more efficient than conventional control. This example demonstrates how the control system adapts and how it performs better than its nonadaptive equivalent. This is an example of gain scheduling.

6.4 Artificial Intelligence

Only one example of artificial intelligence is included in the book. This is really a blending of Bayesian estimation and controls. Machine learning is an offshoot of artificial intelligence so all the machine learning examples could also be considered examples of artificial intelligence.

6.4.1 Autonomous Driving and Target Tracking

Autonomous driving is an area of great interest to automobile manufacturers and to the general public. Autonomous cars are driving the streets today but are not yet ready for general use by the public. There are many technologies involved in autonomous driving. These include

  1. Machine vision: turning camera data into information useful for the autonomous control system

  2. Sensing: using many technologies including vision, radar, and sound to sense the environment around the car

  3. Control: using algorithms to make the car go where it is supposed to go as determined by the navigation system

  4. Machine learning: using massive data from test cars to create databases of responses to situations

  5. GPS navigation: blending GPS measurements with sensing and vision to figure out where to go

  6. Communications/ad hoc networks: talking with other cars to help determine where they are and what they are doing

All of the areas overlap. Communications and ad hoc networks are used with GPS navigation to determine both absolute location (what street and address correspond to your location) and relative navigation (where you are with respect to other cars).

This example explores the problem of a car being passed by multiple cars and needing to compute tracks for each one. We address just the control and collision avoidance problem. A single-sensor version of track-oriented multiple-hypothesis testing is demonstrated for a single car on a two-lane road. The example includes MATLAB graphics that make it easier to understand the reasoning of the algorithm. The demo assumes that the optical or radar preprocessing has been done and that each target registers as a single “blip” in two dimensions. An automobile simulation is included, in which cars pass the car that is doing the tracking; the passing cars use a passing control system that is itself a form of machine intelligence.

This chapter uses a UKF to estimate the state. This is the underlying algorithm that propagates the state (that is, advances the state in time in a simulation) and incorporates measurements into the estimate. A Kalman filter, or other estimator, is the core of any target tracking system.

The section will also introduce graphics aids to help you understand the tracking decision process. When you implement a learning system, you want to make sure it is working the way you think it should, or understand why it is working the way it does.
