1 Zeroing Neural Networks for Control

1.1 Introduction

In addition to remarkable features such as parallelism, distributed storage, and adaptive self‐learning capability, neural networks can be readily implemented in hardware, and have thus been applied widely in many fields [16]. The zeroing neural network (ZNN), as well as its variant (i.e. zeroing dynamics), as a systematic approach to the online solution of time‐varying problems (the scalar case included), has been applied to online matrix inversion [7], motion generation and control of redundant robot manipulators [3], and tracking control of nonlinear chaotic systems [8]. For example, a ZNN model activated by a nonlinear function is applied in [9] to the kinematic control of redundant robot manipulators via Jacobian‐matrix pseudoinversion, which achieves high accuracy but cannot handle the bound constraints existing in the robots. In [10], a finite‐time convergent ZNN model is presented for solving dynamic quadratic programs with application to robot tracking, which requires convex activation functions and cannot remedy the issue of joint‐limit avoidance. Such a ZNN method is further discretized, based on a new three‐step formula, to compute the solution of time‐varying nonlinear equations, so that it can be implemented directly on a digital computer. As to applications, ZNN is exploited in [3] to remedy the joint‐angle drift phenomenon of redundant robot manipulators by minimizing the difference between the desired joint position and the actual one.

It is worth pointing out here that, although these existing models differ in their error functions and activation functions, all of them follow similar design procedures: the ZNN method usually formulates a time‐varying problem as a regulation problem in control. Specifically, the residual error of a ZNN model for the task function to be solved is regulated to zero. A ZNN model activated by a monotonically increasing odd function, with its equilibrium identical to the solution of the time‐varying problem, is then devised to solve the problem recursively. In addition, the design parameter in the ZNN method should be larger than zero. To the best of the authors' knowledge, all existing results on ZNN assume that the set for projection of the activation function is convex, which evidently excludes nonconvex sets from consideration. General conclusions relaxing the convexity constraint on activation functions remain unexplored.
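This design procedure can be illustrated with a minimal sketch of our own (not a model from the cited works): a ZNN solving the scalar time‐varying equation a(t)x(t) = b(t). The coefficient functions, the design parameter `lam`, and the Euler integration step are all illustrative choices.

```python
import numpy as np

# Hypothetical scalar problem: find x(t) such that a(t) * x(t) = b(t).
def a(t):  return 2.0 + np.sin(t)
def da(t): return np.cos(t)
def b(t):  return np.cos(t)
def db(t): return -np.sin(t)

lam = 10.0      # design parameter; must be larger than zero
dt = 1e-3       # Euler integration step
x = 0.0         # arbitrary initial state

for k in range(10000):
    t = k * dt
    e = a(t) * x - b(t)                         # residual error to be regulated to zero
    # ZNN design formula e_dot = -lam * e, solved for x_dot:
    x_dot = (-lam * e - da(t) * x + db(t)) / a(t)
    x += dt * x_dot

t_end = 10000 * dt
residual = abs(a(t_end) * x - b(t_end))
print(residual)
```

Because the time derivatives of the coefficients enter the model explicitly, the residual decays exponentially at a rate set by the design parameter instead of lagging behind the time‐varying solution.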

In this chapter, we make progress in this direction by proposing new results on ZNN to remedy these weaknesses. The ZNN models presented in this chapter are able to deal with a nonconvex projection set images in the activation functions, while the existing solutions require the projection set to be convex. Additionally, this is the first work on ZNN for solving a time‐varying optimization problem with inequality and bound constraints, which opens a door to research on solving time‐varying constrained optimization problems in an error‐free manner. In short, there are two limitations in the existing research on ZNN, i.e. the lack of a technique for handling inequality and bound constraints when solving dynamic optimization problems, and the requirement that the activation function be odd and monotonically increasing. This chapter overcomes these limitations by proposing ZNN models that allow nonconvex sets for projection operations in activation functions and incorporate new techniques for handling inequality constraints.

1.2 Scheme Formulation and ZNN Solutions

In this section, a ZNN model for dynamic quadratic programming subject to equality and inequality constraints is presented. Then, new results are derived using the ZNN model with the aid of a nonconvex activation function.

1.2.1 ZNN Model

Consider the convex dynamic quadratic programming subject to equality and inequality constraints in the form of

where superscript images denotes the transpose operation over a vector or a matrix; smoothly time‐varying matrix images is positive‐definite; images, images being of full‐row‐rank, images and images, images are all smoothly time‐varying.

By adding a time‐varying nonnegative term to the inequality constraint, the convex dynamic quadratic programming problem (1.1) is converted into

(1.2) equation

where images is defined as images. Define a Lagrange function as follows:

(1.3) equation

By using the related Karush–Kuhn–Tucker condition [11], we have

Letting images, the above equation can be rewritten as

where mapping function images is used to denote the left‐hand side of (1.4). By defining images, we adopt the following evolution for images, i.e. the ZNN design formula:

Substituting (1.5) into (1.6) yields a dynamic equation:

(1.7) equation

where

equation

with

equation

When the Jacobian matrix images is nonsingular, the above equation can be further rewritten as

where images, starting from a given initial condition, denotes the neural state as well as the output corresponding to theoretical solution images, with its first images elements constituting the optimal solution images to (1.1).
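The derivation above (error function, ZNN design formula, and the resulting dynamic equation solved for the state derivative) can be sketched in code for a simplified case. The snippet below handles only equality constraints, omitting the inequality‐to‐slack conversion of (1.2); the problem data `W`, `q`, `A`, `b` and the parameter `gamma` are illustrative assumptions, not the chapter's example.

```python
import numpy as np

# Simplified time-varying QP: minimize x^T W(t) x / 2 + q(t)^T x  s.t.  A x = b(t).
def W(t):  return np.diag([2.0 + np.sin(t), 2.0 + np.cos(t)])   # positive-definite
def dW(t): return np.diag([np.cos(t), -np.sin(t)])
def q(t):  return np.array([np.sin(t), np.cos(t)])
def dq(t): return np.array([np.cos(t), -np.sin(t)])
A = np.array([[1.0, 1.0]])                                       # full-row-rank, constant
def b(t):  return np.array([np.sin(t)])
def db(t): return np.array([np.cos(t)])

def M(t):   # KKT coefficient matrix for y = [x; Lagrange multiplier]
    return np.vstack([np.hstack([W(t), A.T]),
                      np.hstack([A, np.zeros((1, 1))])])

def dM(t):  # time derivative of M (A is constant here)
    return np.vstack([np.hstack([dW(t), np.zeros((2, 1))]),
                      np.zeros((1, 3))])

def p(t):  return np.concatenate([q(t), -b(t)])
def dp(t): return np.concatenate([dq(t), -db(t)])

gamma, dt = 10.0, 1e-3
y = np.zeros(3)                        # arbitrary initial state
for k in range(10000):
    t = k * dt
    F = M(t) @ y + p(t)                # KKT residual to be zeroed
    # ZNN design formula F_dot = -gamma * F, solved for y_dot:
    y_dot = np.linalg.solve(M(t), -gamma * F - dM(t) @ y - dp(t))
    y += dt * y_dot

t_end = 10000 * dt
res = np.linalg.norm(M(t_end) @ y + p(t_end))
print(res)
```

Since W(t) is positive‐definite and A has full row rank, the KKT matrix M(t) stays nonsingular, so the explicit solve for the state derivative is well defined along the whole trajectory.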

Table 1.1 Comparison of ZNN‐based and gradient‐based techniques for solving images.

                           Error function             Design formula        Dynamic equation
  ZNN‐based technique      e(t) = f(x(t), t)          ė(t) = −γ e(t)        (∂f/∂x) ẋ(t) = −γ f(x(t), t) − ∂f/∂t
  Gradient‐based technique ε(t) = ‖f(x(t), t)‖²/2     ẋ(t) = −γ ∂ε/∂x       ẋ(t) = −γ (∂f/∂x)ᵀ f(x(t), t)

In addition, it is revealed in [13] that a controller designed by the ZNN‐based technique is naturally stable as long as the design parameter images, whereas stability is not guaranteed for controllers designed by other techniques. This can be deemed another advantage of the ZNN‐based technique over other existing techniques.

A disadvantage of the ZNN‐based technique compared with other existing techniques is that, as shown in Table 1.1, the matrix inversion operation required in the model may cause the solving task to fail when a singularity is encountered.
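One common workaround, shown below as a sketch of our own rather than a remedy adopted in this chapter, is to replace the exact inversion with a damped least‐squares solve, which remains well‐posed even when the coefficient matrix is singular; the regularization constant `eps` is an illustrative choice.

```python
import numpy as np

def damped_solve(J, rhs, eps=1e-6):
    # Solve (J^T J + eps*I) x = J^T rhs: a Levenberg-Marquardt-style step
    # that stays well-posed even when J is rank-deficient.
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + eps * np.eye(n), J.T @ rhs)

J = np.array([[1.0, 2.0],
              [2.0, 4.0]])            # singular (second row is twice the first)
rhs = np.array([1.0, 2.0])            # lies in the column space of J
x = damped_solve(J, rhs)
print(np.linalg.norm(J @ x - rhs))    # small residual despite singular J
```

The price of the damping is a small bias of order `eps` in the computed state derivative, which is usually negligible next to the discretization error of the integrator.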

1.2.2 Nonconvex Function Activated ZNN Model

As reviewed in Section 1.1, different ZNN models as well as their variants have been extensively studied and exploited for solving dynamic problems over the past 15 years. Although these existing models differ in their error functions and activation functions, all of them follow similar design procedures and share the same convergence condition. For example, the ZNN model is often designed as a dynamic system with its equilibrium identical to the solution of the problem to be solved, and then solves the latter recursively. In addition, the design parameter images should be larger than zero, and the activation function used to accelerate convergence should be monotonically increasing and odd. To the best of the authors' knowledge, all existing results on ZNN assume that the set for projection of the activation function is convex, which evidently excludes nonconvex sets from consideration. General conclusions relaxing the convexity constraint on activation functions remain unexplored.

Let images be the projection from a set images onto a set images such that images images with images. The new design formula is then given as follows:

Expanding (1.11) leads to a new nonconvex function activated ZNN model:

It can be concluded from the definition of images that images incorporates the existing ZNN activation functions as special cases. That is to say, any monotonically increasing odd activation function images can be deemed a subcase of images. In addition, different from the existing results, the following special set can be used in the activation function of the ZNN.

  • images, where images and images are two constants and images.

1.3 Theoretical Analyses

In this section, we conduct analyses on the convergence of the presented ZNN model (1.8) and (1.12) via the following theorems.

1.4 Computer Simulations and Verifications

In this section, the following dynamic quadratic programming problem subject to equality and inequality constraints is considered for illustration and for comparison, which is modified from the problem presented in [1]:

In order to investigate the performance of the presented ZNN models, we consider two examples in the following subsections with different bound constraints.

1.4.1 ZNN for Solving (1.13) at images

At images and with the bound constraint incorporated, (1.13) can be further rewritten as

(1.14) equation

Starting with a randomly generated initial state, the corresponding computer simulation results are shown in Figures 1.1–1.3. Specifically, the element trajectories of the state images, images, images and images are shown in Figures 1.1 and 1.2, from which we can observe that the solution of ZNN model (1.8) satisfies the given bound constraint. In addition, the corresponding residual error shown in Figure 1.3 further illustrates the effectiveness of the presented ZNN model (1.8).


Figure 1.1 State vector images of ZNN model (1.8) for solving (1.13) at images. (a) Profiles of images and (b) profile of images.


Figure 1.2 State vector images of ZNN model (1.8) for solving (1.13) at images. (a) Profiles of images and (b) profile of images.


Figure 1.3 Residual error of ZNN model (1.8) for solving (1.13) at images.

It is revealed in Theorem 1.2 that the projection operation for activation functions can be performed over a nonconvex set. To exemplify the choice of images, we consider in particular the following set:

with images and images in the simulation. The choice of images is nonconvex due to the fact that images and images but images. Physically, images defined in (1.15) is generalized from strategies commonly used in industrial bang‐bang control, where only the maximum input action images, the negative maximum input action images, and the zero input action 0 are applicable. To avoid the chattering phenomenon in conventional bang‐bang control, it is preferable to expand the zero input action into a small range images, which results in the definition of images in (1.15). As shown in Figures 1.4–1.6, the state vector images is kept within its bound and the residual error converges to zero as time evolves. The convergence of the residual error in Figure 1.6 validates the effectiveness of Theorem 1.2 for the nonconvex constraint on the activation function.
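A nearest‐point projection onto a nonconvex set of this bang‐bang type can be sketched as follows. Since the chapter's numerical values of the maximum input action and the zero‐action range are not reproduced here, `k = 2.0` and `omega = 0.5` are illustrative assumptions, and the set is written explicitly as S = {−k} ∪ [−omega, omega] ∪ {k}.

```python
import numpy as np

def proj_S(u, k=2.0, omega=0.5):
    # Nearest point of S = {-k} U [-omega, omega] U {k} to the scalar u.
    # The nearest point in each piece is computed, then the closest one wins.
    candidates = np.array([-k, np.clip(u, -omega, omega), k])
    return candidates[np.argmin(np.abs(candidates - u))]

print(proj_S(0.3))    # already inside [-omega, omega]: unchanged -> 0.3
print(proj_S(1.6))    # closer to k than to omega -> 2.0
print(proj_S(-0.9))   # closer to -omega than to -k -> -0.5
```

Note that, unlike projection onto a convex set, this map is discontinuous at the midpoints between the pieces (e.g. between omega and k), which is exactly the behavior the relaxed theory must accommodate.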


Figure 1.4 State vector images of ZNN model (1.8) for solving (1.13) at images. (a) Profiles of images and (b) profile of images.


Figure 1.5 State vector images of ZNN model (1.8) for solving (1.13) at images. (a) Profiles of images and (b) profiles of images.


Figure 1.6 Residual error of ZNN model (1.8) for solving (1.13) at images.

1.4.2 ZNN for Solving (1.13) with Different Bounds

With a new bound constraint incorporated, (1.13) can be further rewritten as

Starting with a randomly generated initial state, the corresponding computer simulation results are shown in Figures 1.7–1.9. Specifically, the element trajectories of the state images and images, and images and images are shown in Figures 1.7 and 1.8, respectively, from which we can observe that the solution of ZNN model (1.12) satisfies the given bound constraint. In addition, all the states vary as time evolves. Moreover, the corresponding residual error shown in Figure 1.9 further illustrates the effectiveness of the presented ZNN model (1.12).


Figure 1.7 State vector images of ZNN model (1.12) for solving (1.16). (a) Profiles of images and (b) profile of images.


Figure 1.8 State vector images of ZNN model (1.12) for solving (1.16). (a) Profiles of images and (b) profiles of images.


Figure 1.9 Residual error of ZNN model (1.12) for solving (1.16).

1.5 Summary

This chapter has pointed out two limitations in the existing ZNN results and then overcome them by proposing ZNN models that allow nonconvex sets for projection operations in activation functions and incorporate new techniques for handling the inequality constraints arising in optimization. Theoretical analyses have been presented to show that the presented ZNN models are globally stable with timely convergence. Finally, illustrative simulation examples have been provided and analyzed to substantiate the efficacy and superiority of the presented ZNN models for real‐time dynamic quadratic programming subject to equality and inequality constraints. This chapter can be deemed a starting point for further investigations on constrained dynamic optimization with time‐varying parameters, which can be generalized and employed for the motion planning and control of redundant robot manipulators [15] and distributed winner‐take‐all‐based task allocation of multiple robots [16].
