5
Robust Neural Controller Design for Robot Arm Control

5.1 Introduction

In recent years, robotic manipulators have been widely employed in industry for labor-intensive and high-accuracy operations. The current shortage of skilled labor and the aging population pose urgent demands for robotic manipulation, e.g. payload transport, welding, painting, and assembly. Among the broad categories of manipulators, redundant manipulators, which have more control degrees of freedom (DOFs) than required for a given task, are advantageous for dexterous manipulation. In comparison with traditional manipulators without extra DOFs, redundant manipulators usually admit more than one control solution for a specific task, which allows designers to exploit the redundancy to fulfill additional requirements, such as obstacle avoidance and control optimization. For these reasons, redundant manipulators have received intensive research attention in recent years.

The introduction of extra DOFs in redundant manipulators increases the capability and flexibility of robotic manipulation, but also poses challenges for control design, since the redundancy must be resolved efficiently in real time. Analytically, the joint velocity of a nonredundant manipulator can be expressed in terms of the inverse of its Jacobian matrix. This result extends to redundant manipulators by replacing the Jacobian inverse with its pseudo-inverse, as the Jacobian matrix in this situation is nonsquare [25]. From an optimization perspective, this solution is equivalent to the one that minimizes the joint speed. This type of method provides the foundation for manipulator kinematic redundancy resolution but suffers from several drawbacks: intensive computation, since the pseudo-inverse must be recomputed continuously; computed joint velocities that may exceed their physical bounds, since no velocity range constraints are imposed; and local instability in some configurations due to the singularity of the Jacobian matrix [15]. Later work employs various strategies to deal with the drawbacks of pseudo-inverse-based solutions, including fast matrix operation algorithms for numerical approximation [28] and iterative methods that approach the solution progressively. However, time efficiency remains a challenging problem for redundancy resolution, since the computation has to be repeated at every time step.
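The pseudo-inverse approach discussed above can be sketched in a few lines of numpy; the 2×3 Jacobian and task velocity below are made-up illustrative numbers, not taken from any specific manipulator:

```python
import numpy as np

# Minimum-norm redundancy resolution: theta_dot = pinv(J) @ x_dot.
# J is m x k with k > m (more joints than task dimensions).
def pinv_resolution(J, x_dot):
    return np.linalg.pinv(J) @ x_dot

# Hypothetical 2x3 Jacobian: a planar task (m = 2) with three joints (k = 3).
J = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5]])
x_dot = np.array([0.1, -0.2])
theta_dot = pinv_resolution(J, x_dot)
```

Note that nothing here bounds `theta_dot`, which is exactly the drawback noted above: joint velocity limits must be imposed separately.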

The parallel processing capability of neural networks has inspired researchers to control redundant manipulators with neural networks, including feed-forward neural networks [29, 31] and recurrent neural networks [31]. Among them, recurrent neural network based solutions achieve great success by exploiting historical information in the control update to save computation. In [32], the redundancy resolution problem is modeled as convex optimization under an equality constraint and solved using a Lagrange neural network. However, this model can only deal with equality constraints; inequality constraints, e.g. the boundedness of joint velocities, cannot be directly handled by Lagrange neural networks. Dual neural networks were presented to address this problem. In [38], this approach is extended to optimize the joint torque when resolving the redundancy. In [40], the authors propose a single-layered dual neural network for the control of kinematically redundant manipulators, which reduces the spatial complexity of the network and increases computational efficiency. There also exists an equivalence between redundancy resolution at the velocity level and at the acceleration level: by formulating the problem as constrained quadratic programming at the acceleration level, various dynamic neural solutions can be obtained that reach control performance similar to velocity-level results [16]. Not confined to the redundancy resolution of a single serial manipulator, the recurrent neural network approach has been applied to the kinematic redundancy resolution of parallel Stewart platforms in [58] and to the coordination of multiple manipulators [3, 42]. In principle, any recurrent neural network for solving general constrained optimization problems can be applied to the redundancy resolution of manipulators [16].

Although great success has been achieved and various models have been presented for the redundancy resolution of manipulators using recurrent neural networks, most work focuses on the nominal design without explicitly considering the presence of noise. In the implementation of neural controllers for manipulator control, noise is inevitable, in the forms of truncation error, rounding error, model uncertainty, and external disturbance. Robustness against additive noise is therefore an important factor in designing reliable neural controllers. However, the design of dual neural networks that are inherently tolerant to noise remains unexplored. This chapter makes progress on this problem by proposing a novel recurrent neural network design that is immune to any time-varying noise in polynomial form. Based on the observation that a polynomial noise vanishes after being differentiated with respect to time sufficiently many times, a neural dynamic system is constructed to learn the noise and counter its impact. Accordingly, we are able to build an improved dual neural network with guaranteed convergence in noisy environments.

5.2 Problem Formulation

In this section, we present the kinematic model of manipulators and formulate the redundancy resolution problem as a control one.

For a $k$-DOF redundant manipulator with joint angle $\theta \in \mathbb{R}^k$, the Cartesian coordinate $x \in \mathbb{R}^m$ of its end-effector in the workspace can be described as a nonlinear mapping:

$$x = f(\theta) \qquad (5.1)$$

For a redundant manipulator, the joint space dimension $k$ is greater than the workspace dimension $m$, i.e. $k > m$. Equation (5.1) becomes as follows after computing the time derivative on both sides:

$$\dot{x} = J(\theta)\dot{\theta} \qquad (5.2)$$

where $J(\theta) = \partial f(\theta)/\partial \theta \in \mathbb{R}^{m \times k}$ is called the Jacobian matrix of $f(\theta)$, $\dot{x}$ is the end-effector velocity in the workspace, and $\dot{\theta}$ is the manipulator joint angular velocity in the joint space. Equation (5.2) describes the kinematics of the manipulator at the velocity level. In comparison with the position-level description (5.1), the velocity-level description is easier to deal with since $\dot{x}$ is affine in $\dot{\theta}$ in (5.2). In applications, the joint angle is usually driven by acceleration, or equivalently by the force generated by motors, described as

$$\ddot{\theta} = u + \omega \qquad (5.3)$$

where $u$ is the acceleration serving as the control input and $\omega$ is an additive noise in the control channel, e.g. the disturbance from electrical and mechanical vibrations. Due to physical constraints, the angular velocity of a manipulator cannot exceed certain limits. Let $\Omega$ denote the feasible angular velocity set; the physical constraint can then be expressed as

$$\dot{\theta} \in \Omega \qquad (5.4)$$

With (5.4) and (5.2), we are ready to define the kinematic redundancy resolution problem of a manipulator as in the following.
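To make (5.1) and (5.2) concrete, the sketch below instantiates $f(\theta)$ for a planar two-link arm (an assumed toy model, not the manipulator used later in this chapter) and checks numerically that $\dot{x} = J(\theta)\dot{\theta}$:

```python
import numpy as np

L1, L2 = 1.0, 0.8  # assumed link lengths of a toy planar 2-link arm

def f(theta):
    # Position-level forward kinematics x = f(theta), Eq. (5.1).
    t1, t2 = theta
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

def jacobian(theta, eps=1e-6):
    # J(theta) = df/dtheta, approximated here by central differences.
    J = np.zeros((2, 2))
    for i in range(2):
        step = np.zeros(2)
        step[i] = eps
        J[:, i] = (f(theta + step) - f(theta - step)) / (2 * eps)
    return J

theta = np.array([0.3, 0.7])
theta_dot = np.array([0.1, -0.2])
x_dot = jacobian(theta) @ theta_dot   # velocity-level kinematics, Eq. (5.2)
```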

5.3 Dual Neural Networks for the Nominal System

In this section, we present the design of a dual neural network for the control of a nominal manipulator without the presence of noises. This will pave the way for our further consideration with noises taken into account.

5.3.1 Neural Network Design

Due to the redundancy of the manipulator, there is more than one control design that can solve the redundancy resolution. To exploit the redundancy in an optimal fashion, we first define the problem as an optimization one and then convert it into a control law that solves the problem in a recurrent way.

To exploit the extra design freedom, following conventions for dual neural network design, we seek a control action that minimizes the following quadratic cost function:

$$\tfrac{1}{2}\dot{\theta}^{T} W \dot{\theta} + c^{T}\dot{\theta} \qquad (5.5)$$

where $W \in \mathbb{R}^{k \times k}$ is a symmetric positive semi-definite matrix and $c \in \mathbb{R}^{k}$ is a vector. This cost function can be utilized to characterize the kinetic energy expense associated with the angular velocity $\dot{\theta}$. For example, choosing $W$ as an identity matrix and $c = 0$, this cost function is the kinetic energy. Generally, this cost function describes a biased and weighted kinetic energy. The optimal solution to (5.5) also needs to satisfy the kinematic model (5.2) of the manipulator, such that the workspace velocity $\dot{x}$ ultimately equals the desired velocity $\dot{x}_d$, as well as the feasibility constraint (5.4). Overall, we formulate it as the following constrained optimization:

$$\min_{\dot{\theta}}\ \tfrac{1}{2}\dot{\theta}^{T} W \dot{\theta} + c^{T}\dot{\theta} \qquad (5.6a)$$
$$\text{subject to}\quad J(\theta)\dot{\theta} = \dot{x}_d \qquad (5.6b)$$
$$\dot{\theta} \in \Omega \qquad (5.6c)$$
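If the inequality constraint (5.6c) is dropped, the remaining equality-constrained quadratic program can be solved directly from its KKT linear system. The sketch below uses hypothetical problem data and serves only as a baseline; handling the velocity bounds is exactly what the dual network is for:

```python
import numpy as np

# Solve  min 0.5 v'Wv + c'v  s.t.  J v = xd  via the KKT linear system
#   [ W  J' ] [ v   ]   [ -c ]
#   [ J  0  ] [ lam ] = [ xd ]
def eq_qp(W, c, J, xd):
    m, k = J.shape
    KKT = np.block([[W, J.T],
                    [J, np.zeros((m, m))]])
    rhs = np.concatenate([-c, xd])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:k], sol[k:]

W = np.eye(3)          # identity weighting: minimum-kinetic-energy objective
c = np.zeros(3)
J = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5]])
xd = np.array([0.1, -0.2])
v, lam = eq_qp(W, c, J, xd)
```

With $W = I$ and $c = 0$ this recovers the pseudo-inverse solution, confirming the optimization interpretation given in the introduction.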

Conventionally, a dual neural network is designed based on the Lagrange function $L(\dot{\theta}, \lambda) = \tfrac{1}{2}\dot{\theta}^{T} W \dot{\theta} + c^{T}\dot{\theta} + \lambda^{T}(J(\theta)\dot{\theta} - \dot{x}_d)$. In this chapter, we augment such a Lagrange function with a quadratic penalty on the equality constraint, which benefits the convergence of the resulting neural dynamics, and define it as follows:

$$L(\dot{\theta}, \lambda) = \tfrac{1}{2}\dot{\theta}^{T} W \dot{\theta} + c^{T}\dot{\theta} + \lambda^{T}(J\dot{\theta} - \dot{x}_d) + \tfrac{1}{2}(J\dot{\theta} - \dot{x}_d)^{T} C (J\dot{\theta} - \dot{x}_d) \qquad (5.7)$$

where $C \in \mathbb{R}^{m \times m}$ is a symmetric positive semi-definite weighting matrix. The optimal solution to (5.6) is identical to the saddle point of the following problem:

$$\min_{\dot{\theta} \in \Omega}\ \max_{\lambda}\ L(\dot{\theta}, \lambda) \qquad (5.8)$$

According to the Karush–Kuhn–Tucker condition [59], the solution of (5.8) satisfies

$$\begin{cases} 0 \in W\dot{\theta} + c + J^{T}\lambda + J^{T} C (J\dot{\theta} - \dot{x}_d) + N_{\Omega}(\dot{\theta}) \\ J\dot{\theta} = \dot{x}_d \end{cases} \qquad (5.9)$$

where $N_{\Omega}(\dot{\theta})$ denotes the normal cone of the set $\Omega$ at $\dot{\theta}$. According to the definition of the normal cone, we have

$$N_{\Omega}(\dot{\theta}) = \{ v \in \mathbb{R}^{k} : v^{T}(y - \dot{\theta}) \le 0,\ \forall y \in \Omega \} \qquad (5.10)$$

with which, the first expression in (5.9) can be written as

$$\big(W\dot{\theta} + c + J^{T}\lambda + J^{T} C (J\dot{\theta} - \dot{x}_d)\big)^{T}(y - \dot{\theta}) \ge 0, \quad \forall y \in \Omega \qquad (5.11)$$

With (5.11), we further conclude

$$\Big(\dot{\theta} - \big(\dot{\theta} - W\dot{\theta} - c - J^{T}\lambda - J^{T} C (J\dot{\theta} - \dot{x}_d)\big)\Big)^{T}(y - \dot{\theta}) \ge 0, \quad \forall y \in \Omega \qquad (5.12)$$

This implies

$$\dot{\theta} = P_{\Omega}\big(\dot{\theta} - W\dot{\theta} - c - J^{T}\lambda - J^{T} C (J\dot{\theta} - \dot{x}_d)\big) \qquad (5.13)$$

where $P_{\Omega}(\cdot)$ is the projection operator defined as

$$P_{\Omega}(z) = \arg\min_{y \in \Omega} \|y - z\|_{2} \qquad (5.14)$$

or equivalently, $P_{\Omega}(z)$ is the one satisfying

$$\big(z - P_{\Omega}(z)\big)^{T}\big(y - P_{\Omega}(z)\big) \le 0, \quad \forall y \in \Omega \qquad (5.15)$$
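When $\Omega$ is a box of joint velocity limits, the projection (5.14) reduces to a componentwise clip, which makes it cheap to evaluate online. The sketch below (with made-up limits) also checks the variational characterization (5.15) on random feasible points:

```python
import numpy as np

def project_box(z, lo, hi):
    # P_Omega for a box set {v : lo <= v <= hi} is elementwise clipping.
    return np.clip(z, lo, hi)

lo, hi = -np.ones(3), np.ones(3)
z = np.array([0.4, 2.0, -3.0])
p = project_box(z, lo, hi)   # components inside the box pass through unchanged
```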

The derivation of (5.13) from (5.12) directly follows by setting $z = \dot{\theta} - W\dot{\theta} - c - J^{T}\lambda - J^{T} C (J\dot{\theta} - \dot{x}_d)$ and $P_{\Omega}(z) = \dot{\theta}$ in (5.15). Overall, (5.9) can be finally expressed as

$$\begin{cases} \dot{\theta} = P_{\Omega}\big(\dot{\theta} - W\dot{\theta} - c - J^{T}\lambda - J^{T} C (J\dot{\theta} - \dot{x}_d)\big) \\ J\dot{\theta} = \dot{x}_d \end{cases} \qquad (5.16)$$

This is a nonlinear equation set and usually is impossible to solve analytically. To solve it numerically, we propose the following recurrent neural network, with a small time constant $\epsilon > 0$, in the form of

$$\begin{cases} \epsilon\,\ddot{\theta} = -\dot{\theta} + P_{\Omega}\big(\dot{\theta} - W\dot{\theta} - c - J^{T}\lambda - J^{T} C (J\dot{\theta} - \dot{x}_d)\big) \\ \epsilon\,\dot{\lambda} = J\dot{\theta} - \dot{x}_d \end{cases} \qquad (5.17)$$

This amounts to the following control law

$$u = \frac{1}{\epsilon}\Big(-\dot{\theta} + P_{\Omega}\big(\dot{\theta} - W\dot{\theta} - c - J^{T}\lambda - J^{T} C (J\dot{\theta} - \dot{x}_d)\big)\Big), \qquad \epsilon\,\dot{\lambda} = J\dot{\theta} - \dot{x}_d \qquad (5.18)$$

in the absence of noises, i.e. $\omega = 0$.
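An explicit-Euler discretization of dynamics of the form (5.17) can be sketched as follows. The projected primal-dual form, the gains, and the test problem are illustrative assumptions rather than the exact model of this chapter:

```python
import numpy as np

def dual_network_solve(J, xd, lo, hi, eps=0.01, dt=1e-3, steps=20000):
    # States: v approximates theta_dot (primal), lam is the dual variable.
    m, k = J.shape
    W, c, C = np.eye(k), np.zeros(k), np.eye(m)
    v, lam = np.zeros(k), np.zeros(m)
    for _ in range(steps):
        grad = W @ v + c + J.T @ lam + J.T @ C @ (J @ v - xd)
        # Projected primal update: v stays inside the box [lo, hi].
        v = v + (dt / eps) * (-v + np.clip(v - grad, lo, hi))
        # Dual ascent on the equality constraint J v = xd.
        lam = lam + (dt / eps) * (J @ v - xd)
    return v, lam

J = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5]])
xd = np.array([0.1, -0.2])
v, lam = dual_network_solve(J, xd, -np.ones(3), np.ones(3))
```

Because the update is a convex combination of the current state and a projected point, the iterate never leaves the feasible box, mirroring how the continuous-time network respects (5.6c) along its trajectory.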

5.3.2 Convergence Analysis

About the presented neural network model, we have the following theoretical conclusion.

5.4 Neural Design in the Presence of Noises

In the last section, we designed a neural network to deal with kinematic redundancy resolution in the nominal situation without noise, i.e. $\omega = 0$ in (5.3). In real applications, noise is inevitable, and it is important to design a noise-tolerant controller for effective control. In this section, we present the design of neural dynamics that takes noises into account.

5.4.1 Polynomial Noises

In this section, we consider the situation with polynomial noises. According to Taylor expansion, any sufficiently smooth time-varying noise can be approximated by a polynomial to a desired accuracy over a finite horizon. In this sense, a polynomial noise can be regarded as a general representation of noises.
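A quick numerical illustration of this point: fitting polynomials of increasing order to a smooth disturbance (a sinusoid here, chosen arbitrarily) drives the worst-case approximation error down:

```python
import numpy as np

t = np.linspace(0.0, 2.0, 200)
noise = np.sin(3.0 * t)          # an assumed smooth time-varying noise

max_errors = []
for order in (1, 3, 5, 7):
    coeffs = np.polyfit(t, noise, order)       # least-squares polynomial fit
    max_errors.append(np.max(np.abs(noise - np.polyval(coeffs, t))))
```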

5.4.1.1 Neural Dynamics

We first present the neural dynamics with the capability to deal with polynomial noises. For an $n$th-order polynomial noise $\omega(t) = \sum_{i=0}^{n} a_i t^{i}$, with the coefficients $a_i$ unknown for $i = 0, 1, \dots, n$, we present the neural controller (5.28), which augments the nominal control law (5.18) with a cascade of $n + 1$ integral feedback terms that estimate the noise and cancel its impact.

About this neural controller, we have the following theoretical results.

About the choice of the gain parameters in the neural controller, we have the following remark.

About the generality of the presented neural controller in coping with any type of time‐varying noises, we have the following remark.

The presented method in this chapter is applicable to various real problems. On this point, we have the following remark.

5.4.1.2 Practical Considerations

In real applications, it is necessary to determine a proper order for the polynomial noise before implementing the neural controller. For a particular problem, a trial-and-error procedure can be used to determine its value. Specifically, we can first assume the polynomial order is $n = 0$, i.e. a constant noise, and design the neural controller accordingly. With the devised neural controller, we can measure the control performance in terms of the tracking error. If it is not satisfactory, we increase the polynomial order to $n = 1$ and check the tracking error again; otherwise, we stop the iteration and use the current neural controller for control. If $n = 1$ cannot deliver a satisfactory performance, we proceed to try $n = 2$. Recursively, we can find a proper order of the polynomial, and the corresponding neural controller, with which the control error is satisfactory. Note that we can always improve the tracking accuracy by increasing the order $n$, as implied by Taylor's theorem on the approximation power of polynomials.
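The trial-and-error procedure above can be sketched as a loop. Here `measure_tracking_error` is a placeholder: in practice it would run the closed-loop system with an order-$n$ controller and report the tracking error; it is mocked below by the residual of an order-$n$ polynomial fit to an assumed quadratic disturbance:

```python
import numpy as np

t = np.linspace(0.0, 2.0, 200)
disturbance = 0.5 + 0.3 * t + 0.2 * t**2   # assumed (unknown-to-us) noise

def measure_tracking_error(n):
    # Placeholder for running the closed loop with an order-n controller:
    # the residual left after an order-n fit mimics the residual noise that
    # the controller cannot cancel.
    coeffs = np.polyfit(t, disturbance, n)
    return np.max(np.abs(disturbance - np.polyval(coeffs, t)))

tolerance = 1e-6
n = 0
while measure_tracking_error(n) > tolerance:
    n += 1   # not satisfactory: increase the polynomial order and retry
```

For the quadratic disturbance assumed here, the loop stops at $n = 2$, matching the intuition that the order should reach that of the underlying noise.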

In addition, we present the architecture of the presented neural network (Figure 5.1) to show its structure from a connectionist perspective. As is clear from Equation (5.28) and Figure 5.1, the neural model contains a set of hidden states, namely the primal state $\dot{\theta}$, the dual state $\lambda$, and the noise-estimation states, together with an explicit output $u$. The hidden states interact with each other in a dynamic way and form the recurrent layer of this neural network. The recurrent layer then impacts the output $u$ through a static mapping, which forms the output layer. The dependency of the variables and the relationships between them, as described by (5.28), are shown in Figure 5.1. From this figure, it can also be observed that the noise-estimation states influence one another in a cascaded way: each state drives the next, so that the cascade as a whole reconstructs the polynomial noise order by order, while the primal state dynamically depends on itself and on the network inputs. All variables in the recurrent layer then impact the output $u$ in a nonlinear way.


Figure 5.1 The architecture of the presented recurrent neural network.

Furthermore, the implementation of the interaction between the neural controller and the manipulator is also very important in real implementations. The manipulator is driven by the acceleration $u$ of the joint motors. With the actuation of all individual joints, the end-effector moves in the workspace, and the control objective is to reach the desired velocity $\dot{x}_d$ in the workspace. For the neural controller in the presence of polynomial noises, the input to the neural controller includes two quantities: one is $\dot{\theta}$, the angular velocity of the joints, which comes from the feedback provided by sensors mounted on the manipulator; the other is the reference velocity $\dot{x}_d$, which comes from the control command. As a summary, we provide the control diagram in Figure 5.2 to describe the interconnection between the presented neural controller and the manipulator.


Figure 5.2 The control diagram of the system using the presented neural controller to steer the motion of a redundant manipulator.

5.4.2 Special Cases

In this section, we examine two special cases of polynomial noises, namely constant noises and linearly time‐varying noises, to show the specific presentation of the neural controllers.

5.4.2.1 Constant Noises

We first consider the situation with a constant noise. In practice, this category of noises can be employed to describe uncompensated weights, manipulator parameter errors, etc.

Choosing $n = 0$ in (5.28), the neural controller falls into the special case with a constant noise, represented in the form (5.39).

In comparison with the neural controller (5.18) in the absence of noises, two additional terms are inserted into the control law: an integral term, which accumulates the quantity measuring the violation of the solution's optimality over time, and a damping term, which provides extra damping to stabilize the system.
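The role of the integral term can be seen on a scalar toy system, an illustrative stand-in for the closed loop rather than the manipulator model: for $\dot{e} = u + \omega$ with unknown constant $\omega$, the feedback $u = -2e - \int e$ yields $\ddot{e} + 2\dot{e} + e = 0$, so the error decays to zero regardless of $\omega$:

```python
# Scalar simulation: constant disturbance rejected by integral feedback.
dt, T = 0.001, 20.0
omega = 1.0        # unknown constant noise
e, z = 1.0, 0.0    # tracking error and its running integral
for _ in range(int(T / dt)):
    u = -2.0 * e - z       # proportional + integral action
    e += dt * (u + omega)  # plant: e_dot = u + omega
    z += dt * e            # z accumulates the error
```

At steady state the integral state settles at $z = \omega$, exactly canceling the noise; this is the mechanism behind the extra integral term in (5.39).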

About the neural controller (5.39), we have the following corollary.

5.4.2.2 Linear Noises

Another important special case of polynomial noises is the one varying linearly with time. It models various physical processes, e.g. aging and fatigue. A linearly varying noise can be represented as $\omega(t) = a + bt$, where $a$ and $b$ are unknown constants and $t$ represents time. The linear noise carries two parts: the constant part $a$ and the time-proportional part $bt$. Choosing $n = 1$ in (5.28), the neural controller falls into this special case, with its neural controller expressed in the form (5.40).

As stated in Corollary 5.1, the neural model (5.39) is able to deal with the constant noise part $a$. For the additional part $bt$, two extra terms are introduced to form the new neural controller: the first is the double integration of the control error, introduced to penalize the control error resulting from the additive noise $bt$; the second is an additional integral term, introduced as damping for stabilization. Notice that the newly introduced terms are both in integral form, due to the fact that the time-varying noise $bt$ grows unbounded as time $t$ increases, and an increasing amount of control action is required to counter its impact.
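The same scalar toy system shows why a double integral is needed for a ramp noise $\omega(t) = a + bt$. With $u = -3e - 3\int e - \iint e$ (assumed illustrative gains, not those of (5.40)), differentiating the closed loop gives $e''' + 3e'' + 3e' + e = 0$, with all poles at $-1$, so the error still decays to zero:

```python
# Scalar simulation: ramp disturbance rejected by double-integral feedback.
dt, T = 0.001, 30.0
a, b = 1.0, 0.5            # unknown coefficients of the linear noise a + b*t
e, z1, z2 = 0.0, 0.0, 0.0  # error, its integral, and its double integral
t = 0.0
for _ in range(int(T / dt)):
    u = -3.0 * e - 3.0 * z1 - z2
    e += dt * (u + a + b * t)  # plant: e_dot = u + omega(t)
    z1 += dt * e
    z2 += dt * z1
    t += dt
```

Here the double-integral state $z_2$ keeps growing to match the ramp, while the single integral settles near $b$, so the tracking error itself vanishes.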

Table 5.1 Summary of the Denavit–Hartenberg parameters of the PUMA 560 manipulator used in the simulation.

Link  a (m)    α (rad)  d (m)
1     0        π/2      0
2     0.43180  0        0
3     0.02030  −π/2     0.15005
4     0        π/2      0.43180
5     0        −π/2     0
6     0        0        0.20000
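Table 5.1 can be turned into a forward-kinematics routine with the standard Denavit–Hartenberg link transform. In the sketch below, the α values follow the commonly published PUMA 560 model and should be treated as assumptions:

```python
import numpy as np

def dh_transform(q, d, a, alpha):
    # Standard Denavit-Hartenberg homogeneous transform for one link.
    cq, sq = np.cos(q), np.sin(q)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[cq, -sq * ca,  sq * sa, a * cq],
                     [sq,  cq * ca, -cq * sa, a * sq],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# (a [m], alpha [rad], d [m]) per link, following Table 5.1.
DH_PARAMS = [(0.0,      np.pi / 2, 0.0),
             (0.43180,  0.0,       0.0),
             (0.02030, -np.pi / 2, 0.15005),
             (0.0,      np.pi / 2, 0.43180),
             (0.0,     -np.pi / 2, 0.0),
             (0.0,      0.0,       0.20000)]

def forward_kinematics(joints):
    # Chain the six link transforms to get the end-effector pose.
    T = np.eye(4)
    for q, (a, alpha, d) in zip(joints, DH_PARAMS):
        T = T @ dh_transform(q, d, a, alpha)
    return T

T = forward_kinematics(np.zeros(6))
```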

About the presented neural controller (5.40), we have the following convergence result in the presence of linear noises.

5.5 Simulations

In this section, we apply the presented neural controller to the redundancy resolution of a PUMA 560 robot arm for the tracking of a circular motion. We compare the presented method with existing dynamic neural solutions in the presence of various noises and show its robustness in dealing with additive noises.

5.5.1 Simulation Setup

In the simulation, we consider a PUMA 560 robot arm, with its Denavit–Hartenberg parameters summarized in Table 5.1, as the manipulator platform for tracking a circular motion with its end-effector. The PUMA 560 is a serial manipulator with six joints and thus has 6 DOFs for control. As the control task only concerns position tracking, the workspace is three-dimensional, and for this task the PUMA 560 serves as a redundant manipulator. The desired motion is to track a circle of radius 0.15 m at an angular speed of 0.2 rad/s. In the simulation, we consider noises up to the fourth order of polynomials with different amplitudes. The parameters of the presented neural network are fixed across all runs, with the remaining parameter determined by Equation (5.38). The simulation results are compared with the existing neural solutions presented in [39] and [60] under the same parameter settings.
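The reference circular motion can be generated as follows. The plane and center of the circle are illustrative assumptions (the source fixes them in its setup), while the radius 0.15 m and angular speed 0.2 rad/s come from the text:

```python
import numpy as np

r, w = 0.15, 0.2                     # radius [m], angular speed [rad/s]
center = np.array([0.5, 0.0, 0.5])   # assumed circle center in the workspace

def desired_position(t):
    return center + r * np.array([np.cos(w * t), np.sin(w * t), 0.0])

def desired_velocity(t):
    # Time derivative of desired_position; tangential speed is r * w.
    return r * w * np.array([-np.sin(w * t), np.cos(w * t), 0.0])

ts = np.linspace(0.0, 50.0, 101)
speeds = [np.linalg.norm(desired_velocity(t)) for t in ts]
```

The constant tangential speed $r\omega = 0.03$ m/s makes this a gentle reference, so tracking errors in the comparisons below are attributable to the injected noises rather than to reference aggressiveness.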

5.5.2 Nominal Situation

In this section, we present simulation results in the nominal situation without noise to show the convergence of the presented neural controller. Figures 5.3 and 5.4 show the result of a typical simulation run of 50 s. As shown in Figure 5.3, the generated trajectory of the PUMA 560 manipulator matches the desired circular motion. Quantitatively, the position tracking errors in the workspace along the three Cartesian directions are drawn in Figure 5.4a. As observed from this figure, the position tracking error remains at a low level in absolute value for all dimensions. The generated joint angles at different times are shown in Figure 5.4b.


Figure 5.3 The schematic of a 6‐DOF PUMA 560 manipulator considered in the simulation and the generated trajectory of PUMA 560 controlled by the presented neural network in the absence of noises. (a) PUMA 560 manipulator and (b) end‐effector trajectory.


Figure 5.4 The time profile of control parameters for the presented neural network in the absence of noises. (a) Position tracking error and (b) joint angles.

5.5.3 Constant Noises

In this simulation, we set the noise to a constant and compare the performance obtained using different neural controllers. Note that the noise enters the control channel in the form of (5.3), so we consider a set of constant noises comparable in magnitude with the control input, at three increasing levels. We compare the results generated by the presented algorithm with those of two existing neural controllers presented in [60] and [39] under the same parameter setup. As shown in Figures 5.5–5.7, in the presence of the smallest constant noise, the position tracking error of the presented method is restricted to a low absolute value across the entire simulation run. In contrast, the dual neural method presented in [60] has a larger tracking error with clear peaks (Figure 5.6), and the system under the dual neural method presented in [39] loses stability and fluctuates severely with time. With the increase of the noise to the intermediate level, the position tracking error of the method presented in [60] increases considerably, reaching a peak value of 0.4, and the system controlled by the neural law of [39] exhibits enhanced fluctuation with an increased frequency. Note that, due to the restriction imposed by the physical size of the manipulator, the position tracking error is always bounded. At this noise level, in contrast to the severe performance degradation of the existing methods [39, 60], the presented one still performs well, with an error level similar to that of the noise-free situation. For the largest noise, the method presented in [60] also loses stability and starts to oscillate with a large error amplitude, as observed in Figure 5.7. As to the presented method, shown in Figure 5.7, after a short transition period the position tracking error returns to a small value.
Note that the short transition shown in the left inset of Figure 5.7a is due to the fact that the large constant noise acts on the system from the beginning of the simulation run, and it takes a short transient time for the presented controller to adapt to it. Overall, the simulation reveals that existing solutions designed for nominal situations are not robust against additive noises, whereas the presented neural controller, with its explicit consideration of noise tolerance, achieves a significant performance improvement over existing ones in dealing with unknown constant noises.


Figure 5.5 Comparisons of the presented approach with existing ones in the presence of a constant noise at the first level. (a) This chapter; (b) [60]; and (c) [39].


Figure 5.6 Comparisons of the presented approach with existing ones in the presence of a constant noise at the second level. (a) This chapter; (b) [60]; and (c) [39].


Figure 5.7 Comparisons of the presented approach with existing ones in the presence of a constant noise at the third level. (a) This chapter; (b) [60]; and (c) [39].

5.5.4 Time‐Varying Polynomial Noises

In this simulation, we consider a set of polynomial noises to verify the effectiveness of the presented neural controller. In particular, we consider a linearly time-varying noise, a quadratically time-varying one, a cubically time-varying one, a fourth-order time-varying one, and one containing all the above ingredients plus a constant. Notice that the noise signals considered in this section are all time-varying, and their values can become very large; for example, by the end of a 50 s simulation run, a linearly growing noise far exceeds the largest constant noise considered in the last section. As demonstrated in Section 5.5.3, the existing methods [39, 60] cannot handle such large noises. In the presence of the above-mentioned noises, the position tracking error remains at a low level for the linear noise, the quadratic noise, and the cubic noise, as shown in Figures 5.8a, b, and c, respectively. Although a short transition is needed for neural adaptation as the noise grows, as seen in Figures 5.8d and e, the presented neural network ultimately reaches a small tracking error comparable with the noise-free situation. To further show the improved performance of the presented neural network in the presence of polynomial noises, we additionally consider tracking a square path in the horizontal plane, with each edge of size 0.3 m. With the same neural controller parameters as for the circular motion, we run one more set of simulations.


Figure 5.8 Performance comparisons of the presented approach for the tracking of circular motions in the presence of various noises. (a) Linear noise; (b) quadratic noise; (c) cubic noise; (d) fourth-order noise; and (e) combined noise with all the above ingredients plus a constant.

5.6 Summary

In this chapter, we consider the recurrent neural network design for redundancy resolution of manipulators in the presence of polynomial noises. A novel neural control law is presented to tackle this problem. Theorems are established to show the convergence of the neural controller with noisy inputs. Extensive simulations verify the theoretical conclusions. Numerical comparisons with existing neural controllers show significant performance improvements in control accuracy.
