7 Neural Network Based Stewart Platform Control

7.1 Introduction

Kinematically redundant manipulators are those that possess more degrees of freedom (DOFs) than are required to specify the position and orientation of the end-effector. The redundancy of parallel manipulators can be exploited to avoid obstacles and singularities [90], enlarge the workspace, improve dexterity, optimize performance, and achieve smooth end-effector motion. Because redundant robots have more DOFs than required, multiple solutions usually exist for kinematic control, which motivates us to exploit the extra DOFs to improve control performance.

The inverse kinematics problem is one of the fundamental tasks in understanding the operation of parallel manipulators, i.e. finding the actuator inputs that produce the desired end-effector trajectories. Conventional designs of parallel mechanisms often encounter singularity problems, and introducing redundancy into the mechanism often provides an effective remedy. In [91], the authors proposed a new 3-DOF symmetric spherical 3-UPS/S parallel mechanism with three prismatic actuators and studied its kinematics, statics, and workspace. A 2(SP+SPR+SPU) serial-parallel manipulator was also considered in [91]; based on this analysis, three new types of kinematically redundant parallel mechanisms were designed, including a new redundant 7-DOF Stewart platform. In [92], the damped least squares method was utilized to tackle the singularity problem; however, it only modifies the end-effector path at the velocity level.

Owing to their outstanding performance in parallel processing and recursive computation, dynamic neural networks, as a special class of neural networks, have long been employed as a powerful tool for the control of conventional serial robot arms. Theoretical investigation of new types of neural models, and their applications to various real-world problems, have also received intensive research attention in recent years. Despite the great success of dynamic neural networks in both theory and applications, reports on using them to address the kinematic resolution problem of Stewart platforms are rare. In this chapter, we make progress in this direction. The problem is formulated as a constrained quadratic program. The Karush–Kuhn–Tucker (KKT) conditions of the problem are obtained by considering it in its dual space, and a dynamic neural network is then designed to solve the problem recurrently. Because the problem is formulated from an optimization perspective, optimality of the motion can be reached directly. For the Stewart platform, the forward kinematics are highly nonlinear and heavily coupled, which poses great challenges for a neural dynamic design aiming at the same control goal. Additionally, because the constraints are modeled as inequalities in the optimization problem, the obtained solution directly complies with the physical constraints. Using convergence theories on projected neural networks, the global convergence of the dynamic neural network to the optimal solution is also proved rigorously. Simulation results verify the effectiveness of the approach in tracking control of the Stewart platform for dynamic motions.

The contributions of this study are threefold. First, to the best of our knowledge, this is the first attempt to use dynamic neural networks for kinematic redundancy resolution of parallel Stewart platforms. Second, both the minimization of energy consumption and the feasibility of physical constraints are taken into account, so the obtained solution is both optimal and feasible; this is in contrast to other control methodologies, which usually cannot guarantee that the control action stays within a feasible set. Third, different from feedforward neural networks, which usually suffer from local optimality, the dynamic neural approach employed in this chapter reaches a globally optimal solution.

7.2 Preliminaries

The pose (position and orientation) of a rigid body in three-dimensional space is uniquely determined by a translation, represented by a three-dimensional vector, and a rotation, represented by a $3\times 3$ rotation matrix parameterized by three Euler angles. The rotation matrix $R$ defined by the Euler angles $(\alpha, \beta, \gamma)$ has the following property for its time derivative:

(7.1) $\dot{R}R^{\mathsf{T}} = \begin{bmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{bmatrix}$

where $\omega = [\omega_1, \omega_2, \omega_3]^{\mathsf{T}}$ is the angular velocity of the rigid body. This property holds for all rotation matrices. Additionally, the rotation matrix is orthogonal, i.e. $RR^{\mathsf{T}} = R^{\mathsf{T}}R = I$, with $I$ denoting the $3\times 3$ identity matrix.
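As a quick check of the skew-symmetry claim, differentiating the orthogonality relation gives

$\frac{d}{dt}\big(R R^{\mathsf{T}}\big) = \dot{R} R^{\mathsf{T}} + R \dot{R}^{\mathsf{T}} = 0 \;\;\Longrightarrow\;\; \big(\dot{R} R^{\mathsf{T}}\big)^{\mathsf{T}} = -\dot{R} R^{\mathsf{T}},$

so $\dot{R} R^{\mathsf{T}}$ is always skew-symmetric, which is exactly the structure appearing on the right-hand side of (7.1).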

For two vectors $a = [a_1, a_2, a_3]^{\mathsf{T}}$ and $b = [b_1, b_2, b_3]^{\mathsf{T}}$, their cross product, denoted $a \times b$, is defined as:

(7.2) $a \times b = \begin{bmatrix} a_2 b_3 - a_3 b_2 \\ a_3 b_1 - a_1 b_3 \\ a_1 b_2 - a_2 b_1 \end{bmatrix}$

The triple product of three vectors in three-dimensional space is defined based on the cross product. For three vectors $a$, $b$, and $c$, their triple product is defined as $a \cdot (b \times c)$, and its value equals

(7.3) $a \cdot (b \times c) = \det \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix}$

The triple product is invariant under circular shifts of its arguments:

(7.4) $a \cdot (b \times c) = b \cdot (c \times a) = c \cdot (a \times b)$

The matrix on the right-hand side of Equation (7.1) is a $3\times 3$ skew-symmetric matrix. As can be verified directly, the general $3\times 3$ skew-symmetric matrix built from a vector $a = [a_1, a_2, a_3]^{\mathsf{T}}$ satisfies, for any $b \in \mathbb{R}^3$,

(7.5) $\begin{bmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix} b = a \times b$

Due to this relation, it is common to use $a^{\times}$ to denote the three-dimensional skew-symmetric matrix associated with $a$:

(7.6) $a^{\times} = \begin{bmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix}$
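As a quick numerical check of these identities, the following short sketch (Python with NumPy; the function names are our own) builds the skew-symmetric matrix of a vector and verifies the cross-product relation (7.5) and the circular-shift property (7.4).

```python
import numpy as np

def skew(a):
    """Return the 3x3 skew-symmetric matrix a^x such that skew(a) @ b == a x b."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))

# (7.5): the skew matrix of a acts on b as the cross product a x b
assert np.allclose(skew(a) @ b, np.cross(a, b))

# (7.4): the triple product is invariant under circular shifts
t1 = np.dot(a, np.cross(b, c))
t2 = np.dot(b, np.cross(c, a))
t3 = np.dot(c, np.cross(a, b))
assert np.isclose(t1, t2) and np.isclose(t2, t3)
```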

7.3 Robot Kinematics

The Stewart platform is a typical parallel mechanism and can be extended to different forms by modifying its mechanism. It consists of a mobile platform on the top, a fixed base, and six independently driven legs connecting these two parts. The two ends of each leg are attached to the mobile platform and the fixed base, respectively, via universal joints. Each leg can be actuated to change its length, adjusting the distance between its two attachment points on the platform and the base. Together, the six legs adjust the orientation and position of the mobile platform by changing their lengths.

7.3.1 Geometric Relation

For the Stewart platform, the global coordinate frame is fixed on the base and the platform coordinate frame is fixed on the mobile platform. Let $a_i \in \mathbb{R}^3$ for $i = 1, 2, \dots, 6$ denote the position, in global coordinates, of the $i$th connection point on the base, and let $b_i \in \mathbb{R}^3$ for $i = 1, 2, \dots, 6$ denote the position, in platform coordinates, of the $i$th connection point on the platform; we use $c_i$ to denote the position of this point in the global frame. Let $l_i$ for $i = 1, 2, \dots, 6$ denote the vector corresponding to the $i$th leg, pointing from the base to the platform. For a point $q'$ expressed in the platform frame, its position $q$ in the global frame is obtained by a rotational and translational transformation:

(7.7) $q = p + Rq'$

where $p \in \mathbb{R}^3$ is the global position of the origin of the platform frame and corresponds to the translational transformation, and $R$ is the rotation matrix, which is uniquely defined by the Euler angles $(\alpha, \beta, \gamma)$:

(7.8) equation

Following Equation (7.7), for the $i$th connection point on the platform, i.e. the point with global coordinates $c_i$ and platform coordinates $b_i$, we have

(7.9) $c_i = p + Rb_i$

Therefore, the $i$th leg vector, which points from the base connection point to the platform connection point, can be expressed as

(7.10) $l_i = c_i - a_i = p + Rb_i - a_i$

For the vector $l_i$, we define $d_i = \|l_i\|$ to represent its length. Accordingly, we have

(7.11) $d_i = \|p + Rb_i - a_i\|$

Notice that both $a_i$ and $b_i$ are constants determined by the geometric structure. The vector $p$ defines the translation of the platform, and $R$, as a function of the Euler angles $(\alpha, \beta, \gamma)$, defines its rotation. Overall, the right-hand side of (7.11) depends on the pose variables of the platform, while the left-hand side of (7.11) is the length of the leg, which is the controlled actuation variable. In this sense, Equation (7.11) for $i = 1, 2, \dots, 6$ defines the kinematic relation between the actuation variables and the pose variables. For a six-dimensional reference pose, the desired leg lengths $d_i$ can be obtained directly from (7.11). In real applications, however, the reference is usually not six-dimensional. For example, in surgical applications of the Stewart platform, one may only care about the position of an end-effector on the platform, not its orientation. In this situation the reference is three-dimensional and three additional DOFs remain as redundancy, so there are usually infinitely many feasible solutions of $d_i$ for $i = 1, 2, \dots, 6$ that reach the reference. Among the feasible solutions, we may be able to identify one that outperforms the others with respect to a chosen optimization criterion. This intuition motivates us to model the task as an optimization problem and to identify the optimal solution for improved performance. However, due to the nonlinearity of Equation (7.11), treating it directly is technically prohibitive. Instead of solving the problem in position space, we therefore solve it in velocity space, where the relation is linear in the velocities.
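To make the position-level relation (7.11) concrete, the following sketch (our own Python/NumPy illustration; the connector coordinates and the ZYX Euler-angle convention are assumptions for the example, not values from the text) computes the six leg lengths for a given platform pose.

```python
import numpy as np

def rotation_zyx(alpha, beta, gamma):
    """Rotation matrix from Euler angles (ZYX convention assumed for this sketch)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rx = np.array([[1, 0, 0], [0, cg, -sg], [0, sg, cg]])
    return Rz @ Ry @ Rx

def leg_lengths(p, R, a, b):
    """Equation (7.11): d_i = ||p + R b_i - a_i|| for each of the six legs."""
    legs = p + (R @ b.T).T - a          # rows are l_i = p + R b_i - a_i
    return np.linalg.norm(legs, axis=1)

# Hypothetical geometry: connectors equally spaced on circles (angles are examples only).
angles = np.deg2rad(np.arange(0, 360, 60))
a = np.c_[0.75 * np.cos(angles), 0.75 * np.sin(angles), np.zeros(6)]   # base points
b = np.c_[1.00 * np.cos(angles), 1.00 * np.sin(angles), np.zeros(6)]   # platform points

p = np.array([0.0, 0.0, 0.125])                 # platform origin in the global frame
R = rotation_zyx(0.0, 0.0, 0.0)                 # neutral orientation
print(leg_lengths(p, R, a, b))
```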

7.3.2 Velocity Space Resolution

For easy treatment, Equation (7.11) is rewritten as

(7.12) $d_i^2 = l_i^{\mathsf{T}} l_i = (p + Rb_i - a_i)^{\mathsf{T}}(p + Rb_i - a_i)$

To obtain the velocity-space relations, we first compute the time derivative of both sides of (7.12), which yields

(7.13) $2 d_i \dot{d}_i = 2 l_i^{\mathsf{T}} \dot{l}_i = 2 l_i^{\mathsf{T}} \big(\dot{p} + \dot{R} b_i + R \dot{b}_i - \dot{a}_i\big)$

Recall that both $a_i$ and $b_i$ are constants, so their time derivatives $\dot{a}_i$ and $\dot{b}_i$ equal zero. For the rotation matrix $R$, according to the preliminary equations (7.1) and (7.6), the time derivative satisfies

(7.14) $\dot{R} = \omega^{\times} R$

Therefore, $\dot{R} b_i$ can be written as follows:

(7.15) $\dot{R} b_i = \omega^{\times} R b_i = \omega \times (R b_i)$

Substituting (7.15) into (7.13) yields

(7.16) $\begin{aligned} d_i \dot{d}_i &= l_i^{\mathsf{T}} \dot{p} + l_i^{\mathsf{T}} \big(\omega \times (R b_i)\big) \\ &= l_i^{\mathsf{T}} \dot{p} + l_i \cdot \big(\omega \times (R b_i)\big) \\ &= l_i^{\mathsf{T}} \dot{p} + \omega \cdot \big((R b_i) \times l_i\big) \\ &= l_i^{\mathsf{T}} \dot{p} + \big((R b_i) \times l_i\big)^{\mathsf{T}} \omega \end{aligned}$

The derivation above makes use of Equations (7.10), (7.15), and the triple-product property (7.4). Noticing that $d_i \neq 0$ is guaranteed by the mechanical structure, we obtain the following result:

(7.17) $\dot{d}_i = \frac{l_i^{\mathsf{T}}}{d_i} \dot{p} + \frac{\big((R b_i) \times l_i\big)^{\mathsf{T}}}{d_i} \omega$

For the six-dimensional vector $\dot{d} = [\dot{d}_1, \dot{d}_2, \dots, \dot{d}_6]^{\mathsf{T}}$ of leg speeds, we have the compact matrix form

(7.18) $\dot{d} = J \begin{bmatrix} \dot{p} \\ \omega \end{bmatrix}$

where

(7.19) $J = \begin{bmatrix} l_1^{\mathsf{T}}/d_1 & \big((R b_1) \times l_1\big)^{\mathsf{T}}/d_1 \\ \vdots & \vdots \\ l_6^{\mathsf{T}}/d_6 & \big((R b_6) \times l_6\big)^{\mathsf{T}}/d_6 \end{bmatrix}$

Equation (7.18) gives the kinematic relation of a 6‐DOF Stewart platform from the velocity of the pose variables to the speed of the legs.
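As an illustration of (7.18)–(7.19), the following sketch (our own NumPy code, reusing the hypothetical geometry of the previous sketch) assembles the velocity-space Jacobian $J$ at a given pose and maps a pose velocity to leg speeds.

```python
import numpy as np

def stewart_jacobian(p, R, a, b):
    """Equation (7.19): row i is [ l_i^T / d_i,  ((R b_i) x l_i)^T / d_i ]."""
    J = np.zeros((6, 6))
    for i in range(6):
        rb = R @ b[i]
        l = p + rb - a[i]          # leg vector, Equation (7.10)
        d = np.linalg.norm(l)      # leg length, Equation (7.11)
        J[i, :3] = l / d
        J[i, 3:] = np.cross(rb, l) / d
    return J

# Hypothetical geometry (illustrative values only, as in the previous sketch).
angles = np.deg2rad(np.arange(0, 360, 60))
a = np.c_[0.75 * np.cos(angles), 0.75 * np.sin(angles), np.zeros(6)]
b = np.c_[1.00 * np.cos(angles), 1.00 * np.sin(angles), np.zeros(6)]
p, R = np.array([0.0, 0.0, 0.125]), np.eye(3)

# Equation (7.18): leg speeds from a pose velocity [p_dot; omega].
pose_velocity = np.array([0.0, 0.0, 0.05, 0.0, 0.0, 0.1])
leg_speeds = stewart_jacobian(p, R, a, b) @ pose_velocity
print(leg_speeds)
```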

7.4 Problem Formulation as Constrained Optimization

Compared with Equation (7.11), Equation (7.18) significantly simplifies the problem: the leg speed $\dot{d}$ in (7.18) is linear in the pose velocity $[\dot{p}^{\mathsf{T}}, \omega^{\mathsf{T}}]^{\mathsf{T}}$, whereas the relation between the leg lengths and the pose variables in Equation (7.11) is nonlinear, or even nonconvex. As in the earlier analysis, when the reference pose velocity is given in all six dimensions, $\dot{d}$ can be solved directly from (7.18). When the reference pose velocity is specified in fewer than six dimensions, however, the extra redundancy becomes available for improving performance. The following equality models the reference velocity constraint in reduced dimensions:

(7.20) $C \begin{bmatrix} \dot{p} \\ \omega \end{bmatrix} = v_d$

where the reference vector $v_d \in \mathbb{R}^m$ with $m < 6$ is pre-given, and the matrix $C \in \mathbb{R}^{m \times 6}$ is a pre-given selection (transformation) matrix. As an example, if we would like to maintain the platform at a given height, i.e. keep the vertical velocity $\dot{p}_z$ at zero, we set $C = [0, 0, 1, 0, 0, 0]$ and $v_d = 0$ in (7.20). Due to the extra design freedom, the solution usually cannot be determined uniquely from (7.20). We therefore define the following criterion to optimize the solution:

(7.21) $\min \; \frac{1}{2} \begin{bmatrix} \dot{p} \\ \omega \end{bmatrix}^{\mathsf{T}} W \begin{bmatrix} \dot{p} \\ \omega \end{bmatrix} + \frac{1}{2} \dot{d}^{\mathsf{T}} Q \dot{d}$

where $W$ and $Q$ are symmetric, positive definite constant matrices, and $\dot{d}$ is the controllable speed of the platform legs. In applications, the first term can be used to represent the kinetic energy of the platform (both translational and rotational) by choosing a proper weighting matrix $W$, e.g. one formed from the mass of the platform and its moment of inertia, and the second term can be interpreted as the input power consumed by the robotic system. This objective function also follows the conventions of control theory. The decision variable $\dot{d}$, which is commanded to the actuators, is subject to physical constraints. In this chapter, we model the physical constraints as linear inequalities of the following form:

(7.22) $A \dot{d} \leq b$

where $A \in \mathbb{R}^{k \times 6}$ and $b \in \mathbb{R}^{k}$, with $k$ being an integer. Note that no constraints are imposed on the pose velocity, since its value is usually specified as a feasible one in the planning stage. In summary, with Equation (7.21) as the objective function, Equation (7.18) as the mapping relation, (7.20) as the reference tracking requirement, and (7.22) as the physical constraints, we can formulate the kinematic control problem of the Stewart platform as the following constrained program:

(7.23a) minimize $\frac{1}{2} [\dot{p}^{\mathsf{T}}, \omega^{\mathsf{T}}] W [\dot{p}^{\mathsf{T}}, \omega^{\mathsf{T}}]^{\mathsf{T}} + \frac{1}{2} \dot{d}^{\mathsf{T}} Q \dot{d}$
(7.23b) subject to $\dot{d} = J [\dot{p}^{\mathsf{T}}, \omega^{\mathsf{T}}]^{\mathsf{T}}$
(7.23c) $C [\dot{p}^{\mathsf{T}}, \omega^{\mathsf{T}}]^{\mathsf{T}} = v_d$
(7.23d) $A \dot{d} \leq b$
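As a concrete illustration of the weighting choice mentioned above, writing the platform mass as $m$ and its moment-of-inertia matrix as $\mathcal{I}$ (symbols of our own), a block-diagonal $W$ turns the first term of (7.23a) into the kinetic energy of the platform:

$W = \begin{bmatrix} m I & 0 \\ 0 & \mathcal{I} \end{bmatrix} \quad \Longrightarrow \quad \frac{1}{2} [\dot{p}^{\mathsf{T}}, \omega^{\mathsf{T}}] W [\dot{p}^{\mathsf{T}}, \omega^{\mathsf{T}}]^{\mathsf{T}} = \frac{1}{2} m \|\dot{p}\|^2 + \frac{1}{2} \omega^{\mathsf{T}} \mathcal{I}\, \omega$

where $I$ is the $3 \times 3$ identity matrix; choosing $Q$ as the identity simply penalizes the sum of squared leg speeds.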

Due to the presence of both equality and inequality constraints in the optimization problem (7.23), it usually cannot be solved analytically. Conventional approaches introduce extra penalty terms formed from the constraints into the objective function and solve the resulting problem numerically by gradient descent. However, penalty-based approaches only reach an approximate solution and are thus not suitable for error-sensitive applications. Instead of using such an approximate approach, in the next section we propose a dynamic neural network solution that asymptotically converges to the theoretical solution.

7.5 Dynamic Neural Network Model

In this section, we first consider the optimization problem (7.23) in its dual space and then present a neural network to solve it dynamically. After that, we investigate the hardware realization of the proposed model.

7.5.1 Neural Network Design

According to the KKT conditions, the solution of problem (7.23) satisfies:

where images, images (images is the number of rows in the matrix images), and images are the dual variables associated with the equality constraint (7.23b), the equality constraint (7.23c), and the inequality constraint (7.23d), respectively. The expression (7.24e) can be simplified to

(7.25) equation

where the nonlinear mapping images maps negative values to zero and leaves non-negative values unchanged. From Equation (7.24a), images can be solved as

Substituting images from (7.26) into Equations (7.24c) and (7.24d) yields

To eliminate images, we first represent it in terms of images and images according to (7.24b) as

then substitute (7.29) into Equations (7.24c) and (7.27), which results in

We use the following dynamics for the solutions of images, images, and images in Equations (7.28), (7.30), and (7.31):

(7.32) equation

where images is a scaling factor. Overall, the proposed dynamic neural network has images, images, and images as state variables and images in (7.27) as the output, which is expressed as follows:

About the proposed neural network model (7.33) for the kinematic redundancy resolution problem (7.23) of the parallel manipulator, we have the following remark.
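To illustrate how this class of dynamic neural networks can be simulated numerically, the following sketch integrates, with a simple forward-Euler rule, a projected dual dynamics for a small convex quadratic program of the same shape as (7.23). The matrices, step sizes, and function names are illustrative assumptions of ours; the sketch mirrors the structure of the design above but is not the exact model (7.33).

```python
import numpy as np

def relu(z):
    """Projection onto the non-negative orthant, i.e. the max(0, .) mapping."""
    return np.maximum(z, 0.0)

def solve_qp_neural(H, E, f, A, b, eps=1e-3, dt=1e-5, steps=100_000):
    """Forward-Euler simulation of a projected dual dynamics for the convex QP
         minimize 0.5 * u^T H u   subject to   E u = f,   A u <= b.
    The dual variables (lam, mu) are the network states; the output u is
    recovered from the stationarity condition H u = E^T lam - A^T mu."""
    Hinv = np.linalg.inv(H)
    lam = np.zeros(E.shape[0])            # multiplier of the equality constraint
    mu = np.zeros(A.shape[0])             # multiplier of the inequality constraint
    for _ in range(steps):
        u = Hinv @ (E.T @ lam - A.T @ mu)                    # network output
        lam = lam + (dt / eps) * (f - E @ u)                 # drive the equality residual to zero
        mu = mu + (dt / eps) * (relu(mu + A @ u - b) - mu)   # keep mu in the non-negative orthant
    return Hinv @ (E.T @ lam - A.T @ mu)

# Toy data standing in for (W, Q, J, C, v_d, A, b); the values are illustrative only.
rng = np.random.default_rng(1)
H = np.eye(6)                                   # strictly convex objective
E = rng.standard_normal((3, 6))                 # equality constraint matrix
f = rng.standard_normal(3)
A = np.vstack([np.eye(6), -np.eye(6)])          # box-type speed limits
b = 0.5 * np.ones(12)

u = solve_qp_neural(H, E, f, A, b)
print("equality residual:", np.max(np.abs(E @ u - f)))   # should be close to 0
print("constraint margin:", np.max(A @ u - b))           # should be <= 0 up to tolerance
```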

7.6 Theoretical Results

In this section, we present theoretical results on the proposed neural networks for solving the redundancy resolution problem of parallel manipulators.

7.6.1 Optimality

In this section, we show that at an equilibrium point of the dynamic neural network (7.33), the corresponding output given by (7.33d) is identical to the optimal solution of problem (7.23). On this point, we have the following theorem.

7.6.2 Stability

In this section, we present theoretical results on the stability of the proposed dynamic neural network model. In Section 7.6.1, we concluded that the equilibrium point of the neural network (7.33) corresponds to the optimal solution of (7.23). Generally speaking, however, a dynamic system may not converge to its equilibrium points; it may instead diverge, oscillate, or even exhibit chaos. Convergence of the proposed neural network is therefore necessary for effective computation. Before presenting the convergence results, we first present a lemma about a general projected dynamic system:

where $F: \mathbb{R}^n \to \mathbb{R}^n$, $\Omega$ is a closed convex subset of $\mathbb{R}^n$, and $P_\Omega$ is the projection operator onto the set $\Omega$.

This lemma gives general convergence results for dynamic systems that involve projection operators. In our system, the mapping that sets negative values to zero is also a projection operator, namely the projection onto the non-negative orthant. With Lemma 7.1, the following stability result can be proved for the proposed model (7.33).
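For orientation, a representative projected dynamic system of the kind referred to above can be written as follows; this is a generic sketch in notation of our own choosing, not a restatement of Lemma 7.1 or of the model (7.33):

$\dot{u} = \lambda \big( P_{\Omega}\big(u - F(u)\big) - u \big), \qquad \lambda > 0$

Convergence of such systems to an equilibrium, which solves the associated variational inequality over $\Omega$, is typically established under monotonicity and Lipschitz continuity assumptions on $F$. The $\max(0, \cdot)$ mapping used in the proposed network is exactly the projection $P_{\Omega}$ for $\Omega$ taken as the non-negative orthant.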

7.6.3 Comparison with Other Control Schemes

In this section, we compare the proposed method with some existing control schemes for a parallel robotic platform.

In [93], feedforward neural network based approaches, including multilayer perceptron and RBF neural networks, were employed to learn the nonlinear mapping of the robot's forward kinematics. Thanks to the approximation power of feedforward neural networks, the static mapping error can be made small by feeding enough training data to the model. However, to determine the weights of the neural networks, an off-line training procedure is necessary before the method can be used. Additionally, because the speed limits of the actuators are not considered, such constraints may not be satisfied in real applications. Moreover, owing to the remaining training error, the control precision largely depends on the size of the training set and on the similarity between the training data and the real data. In [94], the authors proposed to use self-organizing maps to approximate the nonlinearity from joint space to workspace. Because of the required training procedure and the strong dependence of performance on training data, this method inevitably bears limitations similar to those of [93], such as no compliance with speed limits and non-optimality in kinematics. Differently, [95] considered the kinematic model in the design and used the sliding mode method for tracking control of the Stewart platform. In comparison with the methods presented in [93] and [94], this method is inherently an online control scheme, does not require any pre-training, and enjoys high control precision. However, since speed constraints are not considered, it still lacks compliance with this physical restriction. In [96], a fuzzy controller was designed for the online intelligent control of Stewart platforms. This controller is able to reach high-precision tracking of signals in the workspace, but may suffer from limitations similar to those of the sliding mode based approach [95]. In contrast, since the method presented in this chapter treats the task directly as an optimization problem, it reaches the optimal solution accordingly. Further, since the speed constraints are taken into account as inequality constraints in the problem formulation, compliance with the constraints is also achieved. The comparisons of different methods for kinematic control of Stewart platforms are summarized in Table 7.1, from which the advantages of the dynamic neural network based approach for the control of Stewart platforms are clear.

Table 7.1 Comparisons of different methods for kinematic control of Stewart platforms.

Method | Optimality | Online vs. off-line | Pre-training | Compliance with extra constraints | Precision
Approach in [93] | No | Off-line | Yes | No | Low
Approach in [94] | No | Off-line | Yes | No | Low
Approach in [95] | No | Online | No | No | High
Approach in [96] | No | Online | No | No | High
This chapter | Yes | Online | No | Yes | High

7.7 Numerical Investigation

To validate the effectiveness of the proposed approach, in this section we apply the neural network model to the redundancy resolution of a physically constrained Stewart platform.

7.7.1 Simulation Setups

In the simulation, we consider a Stewart platform with the leg connectors on the mobile platform located on a circle of radius 1.0 m at angular positions images, images, images, images, images, and images in the platform frame, and the leg connectors on the fixed base located on a circle of radius 0.75 m at angular positions images, images, images, images, images, and images. For simplicity, the end-effector is placed at the origin of the platform frame (this can always be arranged by defining the platform frame with its origin at the end-effector position). For position tracking in three-dimensional space, the total redundancy is 3, since the input dimension is 6 while the output dimension is 3.

In the simulation, we use the tracking error, defined as the difference between the desired position and the actual position at time $t$, to measure the tracking performance in the circular motion, infinity-sign curve, and square motion tracking scenarios.

7.7.2 Circular Trajectory

In this section, we consider the tracking of a smooth circular path using the proposed method. The desired motion of the end-effector is to follow a circular trajectory at a speed of 2 m/s. The desired circle is centered at images with a radius of 0.08 m and is rotated about the images axis by an angle of images. In the simulation, we simply choose $W$ and $Q$ as identity matrices. The value of the matrix $J$ is computed in real time according to (7.19). The matrix $C$ is chosen to select the translational velocity components, so that the position tracking requirement can be imposed through (7.20). In practice, the actuation speed of each leg is limited to a range determined by the physical constraints of the actuators. To capture this property, we impose the constraint that each leg speed $\dot{d}_i$ is no greater than a bound $\dot{d}^{\max}$ in absolute value, i.e. $|\dot{d}_i| \leq \dot{d}^{\max}$ for $i = 1, 2, \dots, 6$, which is equivalent to $\dot{d}_i \leq \dot{d}^{\max}$ and $-\dot{d}_i \leq \dot{d}^{\max}$. Organizing these conditions in matrix form yields an inequality of the form (7.23d) with $A$ obtained by stacking the $6\times 6$ identity matrix $I$ and $-I$, and $b = \dot{d}^{\max}\mathbf{1}$, with $\mathbf{1}$ representing a twelve-dimensional vector with all entries equal to one. In the simulation, we set the speed bound to images m/s, and the scaling factor is chosen as images. The tracking results obtained by running the simulation for 2 s are shown in Figure 7.1. As shown in Figure 7.1a, the end-effector successfully tracks the circular path with a small tracking error (shown in Figure 7.1b, where the three curves denote the components of the position tracking error along the $x$, $y$, and $z$ axes of the base frame; the errors are less than 0.015 m in amplitude). The circular-path experiment demonstrates the capability of the proposed neural network to resolve, online, the kinematic redundancy of physically constrained manipulators.
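The following sketch (our own Python code; the circle center, the bound value, and the evaluation time are illustrative assumptions) shows how the reference velocity for such a circular path and the inequality data $A$ and $b$ of (7.23d) can be constructed.

```python
import numpy as np

# Speed-limit constraint |d_dot_i| <= d_dot_max written as A d_dot <= b (form (7.23d)).
d_dot_max = 0.6                       # example bound in m/s (illustrative value)
A = np.vstack([np.eye(6), -np.eye(6)])
b = d_dot_max * np.ones(12)

# Desired circular end-effector motion: radius 0.08 m, tangential speed 2 m/s.
radius, speed = 0.08, 2.0
center = np.array([0.0, 0.0, 0.125])  # example center (illustrative value)
omega_path = speed / radius           # angular rate of the path parameter

def desired_position(t):
    phi = omega_path * t
    return center + radius * np.array([np.cos(phi), np.sin(phi), 0.0])

def desired_velocity(t):
    phi = omega_path * t
    return radius * omega_path * np.array([-np.sin(phi), np.cos(phi), 0.0])

# Reference constraint (7.20) with C selecting the translational velocity components.
C = np.hstack([np.eye(3), np.zeros((3, 3))])
v_d = desired_velocity(0.0)           # reference velocity fed to the network at t = 0
```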


Figure 7.1 Tracking of a circular motion. (a) The trajectory of the end‐effector and (b) the time history of the position tracking error.


Figure 7.2 The time evolution of the Stewart platform state variables in the case of circular motion tracking. (a) Orientation of the platform images; (b) position of the end‐effector images; (c) control action images; and (d) leg length images.

Considering the input motion of the six legs, Figure 7.2 shows the time profiles of the Stewart platform state variables, i.e. the three Euler angles of the mobile platform (Figure 7.2a), the end-effector position (Figure 7.2b), the speed of each leg (Figure 7.2c), and the length of each leg (Figure 7.2d). The moving frame attached to the upper platform starts exactly in the middle: the $x$ and $y$ coordinates start from zero, and the $x$ coordinate subsequently oscillates between approximately 0.005 m and images m. The $z$ coordinate starts from approximately 0.125 m, which is the altitude of the upper platform before the actuators begin to move. The harmonic response repeats every 1.25 s, i.e. images, after which the cycle starts again. From Figure 7.2a, a small drift from the neutral orientation can be observed. Despite this drift in orientation, the desired motion, which is specified in terms of end-effector positions, is still achieved, as shown in Figure 7.1; this in turn validates the robustness of the proposed scheme in the presence of orientation drift. The straight dashed lines in Figure 7.2c depict the speed bounds for the control action. It can be observed that each leg speed converges to the bounded region very quickly and stays approximately inside this region throughout the run, except for some short periods (e.g. around time images s) caused by the dynamics of the desired motion. The neural network state variables are plotted in Figure 7.3. From this figure, we can clearly observe the dynamic evolution of the neural activities. It is noteworthy that the neural activities do not converge to constant values; instead, they vary with time, as they are utilized to compensate for and regulate the dynamic motion of the robot.


Figure 7.3 The time evolution of the neural network state variables in the case of circular motion tracking. (a) State variable images; (b) state variable images; and (c) state variable images.

7.7.3 Infinity‐Sign Trajectory

In this section, we consider the tracking of an infinity-sign curve using the presented method, at a constant speed of 1 m/s. The desired trajectory is obtained by rotating a planar infinity-sign curve, analytically expressed as images with images 5 m, about the images axis by an angle of images. The center of this curve is located at images. The parameters $W$, $Q$, $C$, $A$, $b$, the speed bound, and the scaling factor are chosen with the same values as in the circular trajectory tracking case. Simulation results are obtained by running the simulation for 2 s. As shown in Figure 7.4, the end-effector successfully tracks the desired path with a small tracking error. Figure 7.5 shows the time profiles of the Stewart platform state variables, including the three Euler angles of the mobile platform (Figure 7.5a), the end-effector position (Figure 7.5b), the speed of each leg (Figure 7.5c), and the length of each leg (Figure 7.5d). The neural network state variables are plotted in Figure 7.6. Owing to the dynamic evolution of the neural states, the nonlinearity of the Stewart platform is successfully compensated and accurate tracking performance is achieved.


Figure 7.4 Tracking of an infinity-sign motion. (a) End-effector trajectory and (b) position tracking error.


Figure 7.5 The time evolution of the Stewart platform state variables in the case of infinity‐sign motion tracking. (a) Orientation of the platform images; (b) position of the end‐effector images; (c) control action images; and (d) leg length images.


Figure 7.6 The time evolution of the neural network state variables in the case of infinity‐sign motion tracking. (a) State variable images; (b) state variable images; and (c) state variable images.

7.7.4 Square Trajectory

In this section, we investigate square trajectory tracking using the proposed approach. Different from the smooth circular motion, the desired square path is non-smooth at its four corners, which challenges the real-time performance of the controller. In the simulation, the desired motion of the end-effector is to follow a square trajectory centered at images with an edge length of 0.08 m, at a desired speed of 1.0 m/s. The square is rotated about the images axis by an angle of images. For simplicity, we choose the parameters $W$, $Q$, $C$, and $A$ to have the same values as in the circular motion case. The matrix $J$ is computed in real time according to (7.19). The speed bound is chosen as images m/s, with $b$ chosen accordingly, and the scaling factor is chosen as images. The tracking results obtained by running the simulation for 2 s are shown in Figure 7.7. As shown in Figure 7.7a, the end-effector successfully tracks the square motion with a small tracking error (shown in Figure 7.7b, where the three curves denote the components of the position tracking error along the $x$, $y$, and $z$ axes of the base frame; the errors are less than 0.006 m in amplitude). Note that the error curves exhibit jerks at times images and 1.8 s, which result from the velocity switching direction at the corners of the square. Despite these jerks, owing to the nonlinear feedback mechanism in the neural network, the errors reduce swiftly to a very low value (much lower than 0.001 m, as shown in Figure 7.7b after each jerk). These observations validate the effectiveness of the proposed neural scheme in dealing with non-smooth tracking problems.
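Since the square reference is non-smooth at the corners, its velocity reference switches direction discontinuously. The following sketch (our own illustrative code; the edge length and speed follow the values quoted above, while the orientation of the square in space is an assumption) generates such a piecewise-constant velocity reference in the plane of the square.

```python
import numpy as np

edge, speed = 0.08, 1.0                    # edge length (m) and tracking speed (m/s)
period = 4.0 * edge / speed                # time to traverse the whole square
# Unit directions of the four edges, traversed counter-clockwise in the square's plane.
directions = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0],
                       [-1.0, 0.0, 0.0],
                       [0.0, -1.0, 0.0]])

def square_velocity(t):
    """Piecewise-constant reference velocity; switches direction at each corner."""
    edge_index = int((t % period) // (edge / speed))
    return speed * directions[edge_index]

# Example: the reference velocity just before and just after the first corner.
t_corner = edge / speed
print(square_velocity(t_corner - 1e-3), square_velocity(t_corner + 1e-3))
```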


Figure 7.7 Tracking of a square motion. (a) End‐effector trajectory and (b) position tracking error.

The real-time values of the Stewart platform states are shown in Figure 7.8. Similar to the circle tracking case, the drift in orientation observed in Figure 7.8a does not affect the tracking performance. The end-effector position evolves without drift and maintains a very small error from its desired trajectory (Figure 7.8b). Figure 7.8c shows the time profile of the control action, i.e. the speed of each leg. It is clear that the control action is approximately regulated inside the bounded region, which in turn validates the effectiveness of the proposed solution in enforcing the inequality constraints. The dynamic evolution of the neural activities is shown in Figure 7.9.


Figure 7.8 The time evolution of the Stewart platform state variables in the case of square motion tracking. (a) Orientation of the platform images; (b) position of the end-effector images; (c) control action images; and (d) leg length images.


Figure 7.9 The time evolution of the neural network state variables in the case of square motion tracking. (a) State variable images; (b) state variable images; and (c) state variable images.

7.8 Summary

In this chapter, a dynamic neural network is designed to solve the redundancy resolution problem of Stewart platforms. The physical constraints and the optimization criterion are rigorously modeled as a constrained quadratic programming problem. To solve this problem in real time, a recurrent neural network is proposed that simultaneously satisfies the equality and inequality constraints and meets the optimality criterion. Rigorous theoretical proofs are supplied to verify the convergence of the proposed model, and simulation results validate the effectiveness of the proposed solution.
