3
Projection Neural Networks for Robot Arm Control

3.1 Introduction

With advances in mechanics, electronics, and computer engineering, automatic manipulators are becoming increasingly popular in industrial applications to reduce the burden on labor forces. Among the various types of available manipulators, redundant manipulators, which usually possess more degrees of freedom than general manipulators and thus offer increased control flexibility for complicated tasks, have attracted intensive research in recent decades.

Despite the great advantages offered by redundant manipulators in dexterous manipulation for complicated tasks, the efficient control of such manipulators remains a challenging problem. A redundant manipulator provides a nonlinear mapping from its joint space to a Cartesian workspace. The goal of kinematic control is to find a control action in the joint space that produces a desired motion in the workspace. However, the nonlinearity of the mapping makes it difficult to directly solve this problem at the angle level. Instead, in most approaches, the problem is first converted into a problem at the velocity or acceleration level, and solutions are then sought in the converted space. In early work [25], control solutions were directly found using the pseudoinverse of the Jacobian matrix of a manipulator. However, this control strategy suffers from an intensive computational burden because of the need to perform the pseudoinversion of matrices continuously over time. It also suffers from local instability problems [26].
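For concreteness, a minimal sketch of the pseudoinverse-based resolution of [25] at the velocity level is given below (in Python/NumPy; the names J and xdot_d for the Jacobian and the desired workspace velocity are notational assumptions made for this illustration, not the chapter's own symbols).

```python
import numpy as np

def pseudoinverse_rate_control(J, xdot_d):
    """Velocity-level redundancy resolution via the Jacobian pseudoinverse.

    J       : (m, n) manipulator Jacobian at the current joint configuration
    xdot_d  : (m,)   desired end-effector velocity in the workspace
    returns : (n,)   joint-velocity command, the minimum-norm solution of J u = xdot_d
    """
    # np.linalg.pinv recomputes a singular value decomposition at every control
    # step, which is the recurring computational burden noted above.
    return np.linalg.pinv(J) @ xdot_d
```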

To overcome the problems with pseudoinverse‐based solutions, later work [27] formulated the control problem as a quadratic programming problem to find an optimal solution with minimal kinematic energy consumption under physical constraints. Because these physical constraints typically include inequality constraints, the solution to such a problem usually cannot be obtained analytically, and serial processing techniques, e.g. matrix decomposition and Gaussian elimination, have been applied to these problems to obtain numerical solutions [28]. However, the relatively low efficiency of conventional serial processing techniques poses an additional hindrance to the real‐time control of manipulators.
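To illustrate why serial numerical solution is a bottleneck, the sketch below poses the velocity-level problem at a single control instant as a small quadratic program (minimum kinematic energy subject to the velocity mapping and joint-speed bounds) and hands it to a general-purpose solver from SciPy. The formulation and the bound u_max are assumptions consistent with the description above, not the exact program used in the cited works.

```python
import numpy as np
from scipy.optimize import minimize

def qp_rate_control(J, xdot_d, u_max):
    """Solve  min 0.5*||u||^2  s.t.  J u = xdot_d,  |u_i| <= u_max  at one instant.

    A general-purpose serial solver is invoked at every sampling period, which is
    the efficiency issue that motivates the neural-network approaches below.
    """
    n = J.shape[1]
    result = minimize(
        fun=lambda u: 0.5 * float(u @ u),
        x0=np.zeros(n),
        constraints=[{"type": "eq", "fun": lambda u: J @ u - xdot_d}],
        bounds=[(-u_max, u_max)] * n,
    )
    return result.x
```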

The parallel processing capability of neural networks has inspired researchers to explore the use of neural networks to control redundant manipulators. In [29], a neural network was constructed to learn the forward kinematics of a redundant manipulator and subsequently utilized to control such a system based on the learned model. In [30], a feed‐forward neural network was employed to learn the parametric uncertainties in the dynamical model of a robot manipulator. In [31], an approximately optimal resolution scheme was designed using an adaptive critic framework for the kinematic control of a redundant manipulator.

In addition to the use of feed‐forward neural networks to control redundant manipulators, the application of recurrent neural networks (RNNs) for manipulator control has also received intensive research attention. In [32], the authors presented a Lagrange neural network model to minimize a cost function under an equation constraint. This model is able to address constrained quadratic programming problems by converting inequality constraints into equations using slack variables, and thus, this model is applicable for the kinematic control of redundant manipulators following its quadratic programming formulation. For the direct consideration of inequalities in constrained quadratic programming problems, researchers have attempted to consider the problem in its dual space. Various dual neural networks with different architectures have been presented. Typical works include [33, 35], in which analyses were conducted in the dual space and a convex projection function was often employed to represent inequality constraints. Methods of this type are highly efficient for real‐time processing and have been successfully used in various applications, including k‐winners‐take‐all problem solving [36] and image restoration [37]. Because of their generality in coping with constrained quadratic programming problems, they are potentially applicable for redundant manipulator control. In [38], for joint torque optimization, a unified quadratic‐programming‐based dynamical system approach was established for the control of redundant manipulators. The authors of [39] used a minimum‐energy‐based control objective to construct RNNs for the control of redundant manipulators. They also identified periodic oscillations in several neural networks when applied for manipulator control. A single‐layered dual neural network was presented in [40] for the control of kinematically redundant manipulators. The presented neural structure offers reduced spatial complexity of the network and increased computational efficiency. An adaptive RNN with guaranteed asymptotic convergence was presented in [41] and successfully employed for the control of a mobile manipulator with unknown dynamics. In [42, 43], RNN approaches were extended to achieve multiple manipulator cooperation and coordination.

Although great success has been achieved in the kinematic control of redundant manipulators by using RNN approaches, the existing solutions still have certain unsatisfactory aspects that severely restrict the wide industrial application of RNNs for redundant manipulator control. This chapter identifies two particular limitations of existing RNN control solutions. The first limitation is that the position control error of existing solutions accumulates over time, which prohibits their use in tasks that must run for a long time in the presence of additive noise. The second limitation is that existing methods assume the projection set to be convex, which excludes from consideration many projection operations that are actually used in industry. For instance, the bang‐bang controllers that are commonly utilized in many practical control systems cannot be characterized in terms of any convex projection operators [44] and thus cannot be implemented using existing RNN methods. The possibility of modifying RNNs to allow nonconvex projection sets while guaranteeing stable manipulator control remains unexplored. This chapter makes progress on this front by proposing modified neural networks that overcome the two limitations mentioned above. To the best of our knowledge, this is the first neural network model for redundant manipulator control that is subject to neither position error accumulation nor the restriction that the projection set must be convex.

3.2 Problem Formulation

The forward kinematics of a manipulator involves a nonlinear transformation from the joint space to the Cartesian workspace, as described by

(3.1) x(t) = f(θ(t))

where x(t) is an m-dimensional vector in the workspace that describes the position and orientation of the end-effector at time t; θ(t) is an n-dimensional vector in the joint space, each element of which describes a joint angle; and f(·) is a nonlinear mapping determined by the structure and parameters of the manipulator. Because of the nonlinearity and redundancy of the mapping f(·), it is usually difficult to directly obtain the joint vector θ that corresponds to a desired workspace pose x. By contrast, the mapping from the joint space to the workspace at the velocity level is an affine mapping and can therefore be used to simplify the problem significantly, as illustrated in the following. Computing the time derivative on both sides of (3.1) yields

(3.2) ẋ(t) = J(θ) θ̇(t)

where J(θ) = ∂f(θ)/∂θ is the Jacobian matrix of f, and ẋ and θ̇ are the Cartesian velocity and the joint velocity, respectively. Usually, the motion control of a manipulator can be partitioned into two loops: an external loop for manipulator workspace tracking control and an internal loop for motor speed control. A practical servo motor with a dedicated speed controller is able to reach a reference speed swiftly. Under the condition that the time scale of the internal control loop is much shorter than that of the external loop, i.e. the internal loop reaches its reference speed much faster than the external loop stabilizes, the transient of the internal loop can be neglected and the external loop can be designed without direct consideration of the internal loop dynamics. In this chapter, we assume that the internal motor speed control loop is sufficiently fast in comparison with the external loop dynamics, and we focus on the design of the external control loop. Each element of θ̇, which is the angular speed of a particular joint of the manipulator, serves as an input to the external loop. Define the n-dimensional input vector u = θ̇. Then, the manipulator kinematics can be rewritten as follows:

(3.3) ẋ = J(θ) u

Because of the physical limitations of motors, the angular speed of each joint is limited to a certain bounded range. For example, if each individual joint is restricted to a maximum speed u_max, the box constraint |u_i| ≤ u_max for i = 1, …, n applies. To capture the restrictions on u, we impose the following general constraint:

(3.4) u ∈ Ω

where Ω is a set in n-dimensional space. For a redundant manipulator described by (3.3) that is subject to the control input constraint (3.4), we wish to find a control law u(t) such that the tracking error x_d(t) − x(t) for a given reference trajectory x_d(t) converges over time.
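To make the mapping (3.1) and the Jacobian in (3.2) concrete, the following sketch implements both for a planar three-link arm. This is an illustrative stand-in chosen here, not the PUMA 560 used later in the chapter; with a two-dimensional workspace (m = 2) and three joints (n = 3), the arm is redundant in the sense defined above.

```python
import numpy as np

def forward_kinematics(theta, link_lengths):
    """Planar forward map x = f(theta): end-effector position of a serial arm."""
    theta = np.asarray(theta, dtype=float)
    link_lengths = np.asarray(link_lengths, dtype=float)
    phi = np.cumsum(theta)                    # absolute orientation of each link
    return np.array([np.sum(link_lengths * np.cos(phi)),
                     np.sum(link_lengths * np.sin(phi))])

def jacobian(theta, link_lengths):
    """Analytical Jacobian J(theta) = df/dtheta of the planar arm above."""
    theta = np.asarray(theta, dtype=float)
    link_lengths = np.asarray(link_lengths, dtype=float)
    phi = np.cumsum(theta)
    J = np.zeros((2, len(theta)))
    for i in range(len(theta)):
        # Joint i moves every link from i onward.
        J[0, i] = -np.sum(link_lengths[i:] * np.sin(phi[i:]))
        J[1, i] = np.sum(link_lengths[i:] * np.cos(phi[i:]))
    return J
```

With, for example, theta = [0.3, -0.2, 0.5] and unit link lengths, J has more columns than rows, which is exactly the redundancy exploited by the control laws discussed in the remainder of the chapter.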

3.3 A Modified Controller without Error Accumulation

In this section, we first examine an existing RNN designed for the control of redundant manipulators. Then, we modify this existing RNN to address the error accumulation problem for position control. Finally, we prove the stability of the presented controller using Lyapunov theory.

3.3.1 Existing RNN Solutions

As reviewed in Section 3.1, the kinematic control of redundant manipulators using RNNs has been extensively studied in recent decades. Although existing methods of this type differ in the objective functions or neural dynamics used, most of them follow similar design principles: the redundant manipulator control problem is typically formulated as a constrained quadratic optimization problem, which can be equivalently converted into a set of implicit equations. Then, a convergent RNN model, the equilibrium of which is identical to the solution of this implicit equation set, is devised to solve the problem recursively. Without loss of generality, the corresponding optimization problem, with the joint space kinematic energy images as the objective function (where the superscript images denotes the transpose of a vector or matrix) and with a set constraint on the joint velocity images, can be presented as follows:

where images is a set. In most of the existing literature [38–40, 45, 46], the set images is chosen to be images to capture the physical constraints on the joint speeds. With the selection of a Lagrange function images, where images is the Lagrange multiplier corresponding to the equality constraint (3.6), the optimal solution to (3.5)–(3.7) is equivalent to the solution of the following equation set, according to the so‐called Karush–Kuhn–Tucker condition [11]:

(3.8b) equation
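For reference, in standard notation assumed here (u for the joint velocity, λ for the Lagrange multiplier associated with the equality constraint, and P_Ω for the projection onto Ω), a Karush–Kuhn–Tucker characterization of the problem (3.5)–(3.7) typically takes the following form; this is a generic sketch rather than a transcription of the chapter's equations (3.8a) and (3.8b), and the sign convention attached to λ may differ.

```latex
\begin{aligned}
  & u = P_{\Omega}\!\left(J^{\mathsf{T}}(\theta)\,\lambda\right)
      && \text{(stationarity of the Lagrangian over } \Omega\text{)}\\
  & J(\theta)\,u = \dot{x}_{d}
      && \text{(feasibility of the equality constraint (3.6))}
\end{aligned}
```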

A projected RNN for solving (3.8) can be designed as follows:

where images is a scaling factor and images is a projection operation to a set images, which is defined as images. The dynamics of (3.9) has been rigorously proven to converge to an equilibrium that is identical to the optimal solution of (3.5) [39, 40, 47].
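A numerical sketch of a dual projected RNN of this general type is given below; the state names, gains, and signs are assumptions rather than a transcription of (3.9). The joint velocity is read out through the projection, and the co-state is integrated so that the equality constraint is satisfied at equilibrium.

```python
import numpy as np

def project_box(v, u_max):
    """Projection P_Omega onto the box Omega = {u : |u_i| <= u_max}."""
    return np.clip(v, -u_max, u_max)

def projected_rnn_step(lam, J, xdot_d, u_max, eps, dt):
    """One Euler step of a dual projected RNN (assumed generic form).

    Output u = P_Omega(J^T lam); co-state dynamics eps * dlam/dt = xdot_d - J u,
    so that J u = xdot_d holds at equilibrium while u stays inside Omega.
    """
    u = project_box(J.T @ lam, u_max)
    lam = lam + dt * (xdot_d - J @ u) / eps
    return lam, u
```

Calling projected_rnn_step repeatedly with the current Jacobian and desired workspace velocity yields the joint-velocity command for each sampling instant.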

The objective function in (3.5) can include an additional term images to increase the penalty on the velocity‐level tracking error images; in this case, the overall optimization is reformulated as follows:

(3.11) equation

where images is a weight. Note that the solution to (3.10)–(3.12) is identical to that of (3.5)–(3.7), since the additional term images vanishes under the equality constraint images imposed in the formulation. For (3.10)–(3.12), the Lagrange function is images, and a generalized projected RNN can finally be obtained as follows:

(3.13b) equation

Although the RNN model expressed in (3.13) generalizes that of (3.9) by introducing an additional design degree of freedom, they both have identical equilibrium points, achieve redundancy resolution at the velocity level, and are subject to certain limitations, as discussed in detail in the next subsection. Without loss of generality, we proceed with our discussion based on the basic RNN model (3.9).

To summarize the discussion presented in this section, conventional RNN‐based manipulator control involves three steps: (1) model the manipulator control problem as a constrained optimization problem [see Equations (3.5)–(3.7)]; (2) investigate the optimization problem in the dual space and express its solution as a set of nonlinear equations [see Equation (3.8)]; and (3) devise a convergent neural dynamics whose steady‐state solution is identical to that of the nonlinear equation set [see Equation (3.9)].

Note that the Jacobian matrix images in the optimization problem defined by (3.5)–(3.7) varies with time because of the movement of the manipulator. Accordingly, the optimal solution to this optimization problem, which is the desired control action, also varies with time. Because the Jacobian varies continuously with time, so does the desired control action. This fact further implies that there may be no need to recompute the control action from scratch at every instant; instead, each real‐time control action can be evolved from its predecessor, thereby reducing the necessary computation. In other words, historical data regarding the control actions can be leveraged for rapid computation in real‐time control. For example, the RNN in (3.13) computes the difference between two consecutive control actions, thereby significantly reducing computational costs.

3.3.2 Limitations of Existing RNN Solutions

In this subsection, we discuss two limitations of existing RNN solutions: the accumulation of position control error in the workspace over time; and the fact that the projection operations considered in existing RNNs do not admit nonconvex sets.

We first show that (3.9) is subject to drift in the workspace in the tracking of images. Although it can be proven that images converges to zero, the error images of (3.9) accumulates over time when input noise is considered. To illustrate this, we first express images in terms of images according to (3.9b) as follows:

where images represents the value of images at time images, images, and images is the desired coordinate of the end‐effector in the workspace at images. Substituting (3.14) into (3.9a) yields

For images in (3.3), it is found that

(3.16) equation

where images is the workspace coordinate of the end‐effector at time images. Accordingly, the control input images in (3.15) can be rewritten as

Regarding the control law expressed in (3.17), we offer the following remark.

Additionally, regarding the set images in (3.4), we offer the following remark.

To summarize the above remarks, we identify the following limitations of existing RNN solutions for the kinematic control of redundant manipulators, which we wish to address in this chapter:

  1. Error Accumulation: The position tracking error images in the kinematic control of a redundant manipulator described by (3.3) using the control law (3.9) accumulates over time (a toy numerical sketch of this drift follows this list).
  2. Convexity Restriction: In real applications, the set images may not be convex. With a nonconvex images, the control law expressed in (3.9) may lead to instability of a redundant manipulator.
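The drift in limitation 1 can be reproduced with a scalar toy integrator rather than a full manipulator model; the noise level, horizon, and step size below are arbitrary assumptions. Without position feedback, a velocity-level command corrupted by zero-mean noise makes the integrated position perform a random walk, so the position error grows even though the commanded velocity is correct on average.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, sigma = 0.01, 100.0, 0.05      # step size, horizon, noise level (assumed)

x_d = 1.0                             # constant reference position
x = x_d                               # start exactly on the reference
for _ in range(int(T / dt)):
    u = 0.0 + sigma * rng.standard_normal()   # velocity command plus additive noise,
                                               # with no position feedback
    x += dt * u                                # the position integrates the command
print("position error after", T, "seconds:", abs(x_d - x))  # grows like sqrt(T) on average
```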

In the following subsection, we propose a modified control law based on (3.17) to overcome the above limitations of the existing solutions.

3.3.3 The Presented Algorithm

To maintain the effectiveness of (3.9) for manipulator control, in this section, we modify the control law by retaining the negative feedback in (3.9) for control stability while introducing new elements to overcome the two limitations identified in Section 3.3.2.

To overcome the error accumulation limitation regarding position control, we first remove the terms images and images from the control law expressed in (3.17), resulting in

To avoid the transient violation of the constraint images, we simply replace (3.18) with its steady‐state value, which satisfies this constraint. As a result, the presented control law is expressed as follows:

Equation (3.18) can be regarded as an alternative form of the presented control law (3.19) obtained by passing it through a first‐order low‐pass filter images, where images. The low‐pass filtering property of the control law (3.18) allows it to generate smoother control actions than those produced by (3.19). This property is useful in the practical implementation of the presented control law to avoid sharp changes in the control actions. However, the set constraint images cannot be guaranteed to hold in transient states for (3.18). By contrast, images holds unconditionally for (3.19).
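In code, a law of the kind described in this subsection can be sketched as follows. Because the expressions (3.18) and (3.19) are not reproduced here, the position-feedback form u = P_Ω(c Jᵀ(x_d − x)) and its first-order filtered variant are stated as assumptions consistent with the surrounding discussion, with c a positive gain and the projection taken as a box clip.

```python
import numpy as np

def control_law_319(theta, x_d, c, u_max, fk, jac):
    """Assumed position-feedback form (cf. (3.19)): u = P_Omega(c * J^T (x_d - f(theta))).

    fk(theta) and jac(theta) return the forward map and the Jacobian.  The projection
    keeps u inside Omega at every instant, and the position error itself, rather than
    its time integral, drives the command, so noise does not accumulate.
    """
    return np.clip(c * jac(theta).T @ (x_d - fk(theta)), -u_max, u_max)

def control_law_318_step(u_prev, theta, x_d, c, u_max, eps, dt, fk, jac):
    """One Euler step of the assumed low-pass-filtered variant (cf. (3.18)):
    eps * du/dt = -u + P_Omega(c * J^T (x_d - x)).  Smoother, but u may leave
    Omega during transients, as noted in the text."""
    return u_prev + (dt / eps) * (control_law_319(theta, x_d, c, u_max, fk, jac) - u_prev)
```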

With regard to the two limitations discussed in Section 3.3.2, we offer the following remark.

From the perspective of neural networks, we offer the following remark.

3.3.4 Stability

In this section, we present a theorem and a theoretical proof regarding the stability of the presented control law (3.19).

Regarding the requirement of images in Theorem 3.1, we offer the following remark.

Regarding the computational efficiency of the presented control law (3.19), we offer the following remark.

3.4 Performance Improvement Using Velocity Compensation

In the previous section, we considered the regulation of images to a constant desired value images in the workspace for the kinematic control of redundant manipulators. In this section, we extend the presented algorithm to dynamic tracking by applying velocity compensation.

3.4.1 A Control Law with Velocity Compensation

As derived in the proof of Theorem 3.1, the overall dynamics of the system when the control law expressed in (3.19) is used can be written as

To illustrate the limitations of (3.19) when it is used to track a dynamic signal images that is not constant with time, we consider the case in which images. In this case, the right‐hand side of (3.38) is zero, which cannot support the variation of images to follow the changes in images with time. To address this issue, we introduce an additional term into (3.19) as follows:

where images is a compensation term introduced to satisfy the velocity requirement. To ensure that the desired velocity can also be reached in the steady state, we impose the requirement that images. Because of the redundancy of the manipulator, the solution for such a images is not unique. We choose the solution with the minimum consumption of kinematic energy, i.e. the images that minimizes images. The solution for images can thus be readily obtained as images, where images is the pseudoinverse of images. Note that this expression involves a pseudoinversion, which is computationally intensive. To avoid the direct computation of a matrix pseudoinverse, we design the following dynamics to recursively approach images:

(3.40) equation

where images is a scaling factor and images is a co‐state. By considering this dynamics together with (3.39), the proposed control law with velocity compensation can be written as follows:

Regarding the control law expressed in (3.41), we offer the following remark.

From the perspective of neural networks, we offer the following remark.
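The recursive computation of the compensation term can be sketched as follows; this is an assumed realization of the idea behind (3.40), with placeholder names and gains. The co-state lam is integrated so that w = Jᵀ lam converges to the minimum-norm solution of J w = ẋ_d, i.e. to pinv(J) ẋ_d, without ever forming a pseudoinverse explicitly.

```python
import numpy as np

def velocity_compensation_step(lam, J, xdot_d, eps, dt):
    """One Euler step of an assumed recursive scheme for the compensation term (cf. (3.40)).

    w = J^T lam; the co-state dynamics eps * dlam/dt = xdot_d - J w drive J w toward
    xdot_d, so w converges to the minimum-norm (minimum kinematic energy) solution.
    """
    w = J.T @ lam
    lam = lam + dt * (xdot_d - J @ w) / eps
    return lam, J.T @ lam
```

At equilibrium J Jᵀ lam = ẋ_d, hence w = Jᵀ (J Jᵀ)⁻¹ ẋ_d, which is exactly the minimum-energy compensation chosen above, provided that J has full row rank.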

3.4.2 Stability

In this subsection, we present a stability analysis of the control law expressed in (3.41) for the case in which the desired workspace coordinate is varying with time.

Regarding the computational efficiency of the presented control law (3.41), we offer the following remark.

Additionally, regarding the impact of the internal loop dynamics, we offer the following remark.

Table 3.1 Summary of the Denavit–Hartenberg parameters of the PUMA 560 manipulator used in the simulation.

Link   a_i (m)   α_i (rad)   d_i (m)
1      0         images      0.67
2      0.4318    0           0
3      0.4318    images      0.15005
4      0         images      0
5      0         images      0
6      0         0           0.2

3.5 Simulations

In this section, we consider numerical simulations of a PUMA 560 manipulator with the parameters summarized in Table 3.1 to demonstrate the effectiveness of the presented algorithms. The PUMA 560 is an articulated manipulator with six independently controlled joints. Its end‐effector can reach any position in its workspace at any orientation. In these simulations, we considered only the three‐dimensional position control of the end‐effector, and thus, the PUMA 560 served as a redundant manipulator for this particular task.

3.5.1 Regulation to a Fixed Position

We first performed simulations to verify Theorem 3.1 by using the control law expressed in (3.19) to drive the manipulator's end‐effector to a fixed position of images m in Cartesian space. For the scaling coefficient images and the set images, a typical simulation run generated with a random initialization is shown in Figure 3.1. After a short transient, the joint angle images converges to a constant value, as shown in Figure 3.1b, and correspondingly, the end‐effector position images settles to a constant value in the workspace. The control error images, where images represents the reference position in the workspace, approaches zero over time, as shown in Figure 3.2a. Notably, the control input images shown in Figure 3.2b remains within the set images at all times. It saturates at the beginning of the simulation, when the error is large, and converges to zero as the control error attenuates. Overall, as shown in Figure 3.1a, the end‐effector of the PUMA 560 successfully reaches the reference position under the simulated control scheme.
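A closed-loop sketch of the kind of simulation reported here is given below (Euler integration of the joint angles under the assumed law from Section 3.3.3; the kinematics callbacks, gain, bound, and step size are placeholders rather than the chapter's settings).

```python
import numpy as np

def simulate_regulation(theta0, x_d, c, u_max, fk, jac, dt=0.001, T=10.0):
    """Regulate the end-effector to a fixed workspace position x_d, as in Section 3.5.1,
    using the assumed law u = P_Omega(c * J^T (x_d - x)) and explicit Euler integration."""
    theta = np.array(theta0, dtype=float)
    for _ in range(int(T / dt)):
        u = np.clip(c * jac(theta).T @ (x_d - fk(theta)), -u_max, u_max)
        theta = theta + dt * u            # the joints integrate the velocity command
    return theta, np.linalg.norm(x_d - fk(theta))   # final configuration and error norm
```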


Figure 3.1 Simulation results for the position regulation control of the end‐effector of a PUMA 560 to maintain a fixed position of images m in the workspace. (a) The end‐effector trajectory and (b) the time history of the joint angle images.


Figure 3.2 Simulation results for the position regulation control of the end‐effector of a PUMA 560 to maintain a fixed position of images m in the workspace. Time history of (a) the control error images and (b) the control action images.

3.5.2 Tracking of Time‐Varying References

In this subsection, we consider the case of a time‐varying reference position to verify Theorem 3.2. The reference position of the PUMA 560's end‐effector moves at an angular speed of 0.2 rad/s along a circle centered at images m with a radius of 0.3 m and a revolution angle of images around the images axis. For a control action scaling factor of images, a co‐state scaling factor of images, and images, a typical simulation run using the control law expressed in (3.41) is presented in Figures 3.3 and 3.4. The joint angle images varies with time, as seen in Figure 3.3b. The control error images, where images represents the circular reference motion in the workspace, approaches zero over time after a short transient state, as shown in Figure 3.4a. The control input images shown in Figure 3.4b remains within the set images at all times, with a saturation at the beginning when the error is large. Notably, the control action images does not converge to zero, as is required to compensate for the variation in the reference position with time. The fluctuations in images shown in the second inset figure in Figure 3.4b provide this time‐varying compensation. Overall, as shown in Figure 3.3a, the end‐effector of the PUMA 560 successfully tracks the time‐varying reference motion after starting from a random initial configuration.


Figure 3.3 Simulation results for the tracking control of the end‐effector of a PUMA 560 with respect to a time‐varying reference along a circular path. (a) The end‐effector trajectory and (b) the time history of the joint angle images.


Figure 3.4 Simulation results for the tracking control of the end‐effector of a PUMA 560 with respect to a time‐varying reference along a circular path. Time history of (a) the control error images and (b) the control action images.

As a supplement to the theoretical justification that the set images is allowed to be non‐convex in Theorems 3.1 and 3.2, in this subsection, we numerically show that nonconvex sets can be chosen as the projection set images for the presented control laws (3.19) and (3.41) without loss of stability. To exemplify the choice of images, we considered the following set in the simulation:

with images and images. Note that this choice of images is nonconvex because images and images but images. Physically, the images defined in (3.52) is generalized from commonly used strategies in industrial bang‐bang control, in which only the maximum input action images, the negative of the maximum input action images, and a zero input action 0 are valid. To avoid chattering phenomena in conventional bang‐bang control, it is preferable to expand the zero input action to a small range images, thus leading to the definition of images given in (3.52). We simulated the use of (3.19) for regulation to a fixed position with the same parameter configuration as in Section 3.5.1 to verify Theorem 3.1 and the use of (3.41) for the dynamic tracking of a time‐varying trajectory with the same parameter configuration as in Section 3.5.2 to verify Theorem 3.2. As shown in Figures 3.5a and 3.6a, the control error converges over time in both cases. In Figures 3.5b and 3.6b, the control actions are either equal to images or within the small range images, which demonstrates the compliance of the control action with the nonconvex set images in (3.52). In both Figures 3.5b and 3.6b, because of the relatively large initial control error, the control actions are as large as images. As time elapses, the control actions reduce to the range of images after 2 s for Figure 3.5b and after 4 s for Figure 3.6b, and they subsequently remain in this range. The convergence of the control error in Figure 3.6 confirms the effectiveness of Theorems 3.1 and 3.2 for nonconvex projection sets.
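For reference, the elementwise nearest-point projection onto a bang-bang-like nonconvex set of the form described by (3.52) can be sketched as follows; here u_max stands for the saturation value and delta for the small band around zero, both placeholders rather than the chapter's numerical values.

```python
import numpy as np

def project_nonconvex(v, u_max, delta):
    """Nearest-point projection onto Omega = {u : each u_i lies in [-delta, delta]
    or equals +u_max or -u_max}, applied elementwise."""
    v = np.asarray(v, dtype=float)
    cand_box = np.clip(v, -delta, delta)    # nearest point in the small band around zero
    cand_sat = np.sign(v) * u_max           # saturation value with the same sign as v
    use_sat = np.abs(v - cand_sat) < np.abs(v - cand_box)
    return np.where(use_sat, cand_sat, cand_box)
```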


Figure 3.5 Simulation results obtained using the presented control laws with a nonconvex projection set. Time history of (a) the control error images with (3.19) and (b) the control action images with (3.19).


Figure 3.6 Simulation results obtained using the presented control laws with a nonconvex projection set. Time history of (a) the control error images with (3.41) and (b) the control action images with (3.41).

3.5.3 Comparisons

In this subsection, we compare the performance of the presented control laws with that of existing RNN solutions for the tracking control of redundant manipulators with time‐varying references.

We compare the presented controllers, i.e. controller (3.19) and controller (3.41), with existing controllers based on dynamic neural networks [38–40, 45] for the control of redundant manipulators. The controllers presented in [38, 45] extend the results of [39, 40] regarding velocity‐level redundancy resolution to acceleration‐level resolution. For the controllers presented in this chapter, the focus is on velocity‐level resolution. As summarized in Table 3.2, the presented controllers are able to handle a nonconvex projection set images, whereas the existing controllers [38–40, 45] require the projection set to be convex. Additionally, because of the lack of direct position feedback in the methods of [38–40, 45], they cannot achieve position regulation control with respect to a fixed position. For a time‐varying reference position, they require the initial position of the end‐effector to lie on the desired reference path for position tracking. By contrast, the two presented controllers are able to cope with both time‐invariant regulation and time‐varying tracking problems, and they allow the manipulator's end‐effector to be initialized at any position in the workspace. Additionally, in the case of additive noise, the existing neural network solutions [38–40, 45] suffer from error accumulation, whereas the presented neural controllers, benefiting from the availability of position feedback, have a bounded error that does not accumulate with time. Another major difference, as indicated in Table 3.2, is that the presented controllers satisfy the set constraint images at all times. By contrast, the controllers presented in [38–40, 45] are designed to ensure that images falls into the set images only after the steady state is reached. Accordingly, a peaking phenomenon often occurs in the transition to the steady state. Theoretically, the controllers in [38–40, 45] and those presented in this chapter are all guaranteed to converge. Because of structural differences, they require different numbers of neurons (see Table 3.2) to execute the task of end‐effector tracking in a three‐dimensional workspace using the PUMA 560.

Table 3.2 Comparisons of different RNN algorithms for the tracking control of a PUMA 560 manipulator.

                        Nonconvex  Set constraint  Initial         Number of  Convergence  Regulation  Tracking  Error         Acceleration
                        set        in transients   position        neurons                 error       error     accumulation  vs. velocity
Controller (3.19)       Yes        Yes             Any             6          Yes          Zero        Non-zero  No            Velocity
Controller (3.41)       Yes        Yes             Any             9          Yes          Zero        Zero      No            Velocity
Controller (3.9) [40]   No         No              Restrictive a)  9          Yes          Fail b)     Zero      Yes           Velocity
Controller in [39]      No         No              Restrictive a)  15         Yes          Fail b)     Zero      Yes           Velocity
Controller in [38]      No         No              Restrictive a)  6          Yes          Fail b)     Zero      Yes           Acceleration
Controller in [45]      No         No              Restrictive a)  9          Yes          Fail b)     Zero      Yes           Acceleration

a) For the controllers presented in [38–40, 45], the end‐effector's initial position is required to be on the desired trajectory for tracking.

b) The controllers presented in [38–40, 45] are able to achieve the tracking of a time‐varying reference position but fail for position regulation to a fixed reference position.

Table 3.3 The RMS position tracking errors of the different controllers.

                      Noise level 1   Noise level 2   Noise level 3
Controller 1 (3.19)   0.0102          0.0155          0.0293
Controller 2 (3.41)   0.0034          0.0082          0.0286
Controller 3 [40]     0.0182          0.1123          0.5005
Controller 4 [39]     0.0347          0.1977          0.4630

In addition, we compare the presented solutions with the existing solutions in terms of their robustness to additive noise. Because the neural controllers presented in this chapter are focused on the velocity‐level kinematic control of redundant manipulators, we compare them with neural controllers presented in [39, 40] in particular, which also address the same problem at the velocity level. For the simulations of all compared controllers, we injected random Gaussian noise into the system at three different levels of images. As observed from Figures 3.7–3.12, when controller 1 and controller 2 are used, which are the presented controllers corresponding to (3.19) and (3.41), respectively, the position tracking errors (images, images, and images along the images, images, and images directions, respectively) decrease over time and ultimately lie within a bounded range. By contrast, for controller 3 (presented in [40]) and controller 4 (presented in [39]), although the end‐effectors are placed on the desired path at time images (as shown in Figures 3.7–3.12, which indicates that all initial errors are zero for controllers 3 and 4), in the presence of additive noise, the tracking errors tend to diverge over time. As the noise level images increases, the divergence of the tracking errors also increases when controller 3 or 4 is used. As shown in Table 3.3, the root‐mean‐square (RMS) tracking error between images s and images s is 0.0182 for controller 3 and 0.0347 for controller 4 in the case of images. These values increase to 0.1123 and 0.1977 for controllers 3 and 4, respectively, when images and further increase to 0.5005 for controller 3 and 0.4630 for controller 4 when images. By contrast, the RMS tracking errors between images s and images s for controllers 1 and 2 always remain within a bounded range of values that are much lower than the initial tracking errors, thereby confirming their effectiveness.
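The RMS figures reported in Table 3.3 are simply the root-mean-square of the recorded tracking-error samples over the stated evaluation window; a small helper is sketched below (the sampling of the error signal and the choice of window are assumed to happen outside this function).

```python
import numpy as np

def rms_tracking_error(err_samples):
    """Root-mean-square of tracking-error samples over the evaluation window,
    the metric reported in Table 3.3."""
    err = np.asarray(err_samples, dtype=float)
    return float(np.sqrt(np.mean(err ** 2)))
```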


Figure 3.7 Simulation comparisons for the tracking control of the end‐effector of a PUMA 560 with respect to a time‐varying reference along a circular path using different neuro‐controllers, i.e. controller 1 (control law (3.19) presented in this chapter), controller 2 (control law (3.41) presented in this chapter), controller 3 [40], and controller 4 [39], in the presence of random Gaussian noise at different levels of images. (a) Controller 1, noise level images and (b) controller 1, noise level images


Figure 3.8 Simulation comparisons for the tracking control of the end‐effector of a PUMA 560 with respect to a time‐varying reference along a circular path using different neuro‐controllers, i.e. controller 1 (control law (3.19) proposed in this chapter), controller 2 (control law (3.41) presented in this chapter), controller 3 [40], and controller 4 [39], in the presence of random Gaussian noise at different levels of images. (a) Controller 1, noise level images and (b) controller 2, noise level images.


Figure 3.9 Simulation comparisons for the tracking control of the end‐effector of a PUMA 560 with respect to a time‐varying reference along a circular path using different neuro‐controllers, i.e. controller 1 (control law (3.19) proposed in this chapter), controller 2 (control law (3.41) presented in this chapter), controller 3 [40], and controller 4 [39], in the presence of random Gaussian noise at different levels of images. (a) Controller 2, noise level images and (b) controller 2, noise level images.


Figure 3.10 Simulation comparisons for the tracking control of the end‐effector of a PUMA 560 with respect to a time‐varying reference along a circular path using different neuro‐controllers, i.e. controller 1 (control law (3.19) proposed in this chapter), controller 2 (control law (3.41) presented in this chapter), controller 3 [40], and controller 4 [39], in the presence of random Gaussian noise at different levels of images. (a) Controller 3, noise level images and (b) controller 3, noise level images.


Figure 3.11 Simulation comparisons for the tracking control of the end‐effector of a PUMA 560 with respect to a time‐varying reference along a circular path using different neuro‐controllers, i.e. controller 1 (control law (3.19) proposed in this chapter), controller 2 (control law (3.41) presented in this chapter), controller 3 [40], and controller 4 [39], in the presence of random Gaussian noise at different levels of images. (a) Controller 3, noise level images and (b) controller 4, noise level images.


Figure 3.12 Simulation comparisons for the tracking control of the end‐effector of a PUMA 560 with respect to a time‐varying reference along a circular path using different neuro‐controllers, i.e. controller 1 (control law (3.19) presented in this chapter), controller 2 (control law (3.41) presented in this chapter), controller 3 [40], and controller 4 [39], in the presence of random Gaussian noise at different levels of images. (a) Controller 4, noise level images and (b) controller 4, noise level images.

3.6 Summary

This chapter addresses the control of redundant manipulators using a neural‐network‐based approach. Two limitations of existing RNN solutions are identified, and modified models are established to overcome these limitations. Rigorous theoretical proofs are supplied to verify the stability of the presented models. Simulation results confirm the effectiveness of the presented solutions and demonstrate their advantages over existing neural solutions.
