Chapter 10

Robot Trajectory Generation in Joint Space

Abstract

To generate desired robot trajectories, this chapter proposes a method that differs from previous approaches such as inverse kinematics and dynamic time warping. The classic hidden Markov model (HMM) is modified so that it is better suited to trajectory generation in joint space. We introduce a new auxiliary output in the HMM to help the training process, and we modify Lloyd's algorithm for the HMM so that it can solve the problems of joint-space learning. The proposed method is validated with a two-link planar robot and a four degree-of-freedom (4-DoF) exoskeleton robot.

Keywords

Joint space; Trajectory generation; Hidden Markov model

10.1 Codebook and key-points generation

The dynamics of an n-link robot in joint space can be written as

$$M(q)\ddot{q}+c(q,\dot{q})\dot{q}+g(q)+F(\dot{q})=\tau \tag{10.1}$$

where $q\in\mathbb{R}^n$ denotes the link angles, $\dot{q}\in\mathbb{R}^n$ denotes the link velocities, $M(q)\in\mathbb{R}^{n\times n}$ is the inertia matrix, $c(q,\dot{q})\in\mathbb{R}^{n\times n}$ is the centripetal and Coriolis matrix, $g(q)\in\mathbb{R}^n$ is the gravity vector, $F(\dot{q})\in\mathbb{R}^n$ collects the friction terms, and $\tau\in\mathbb{R}^n$ is the control input vector.

The objective is to generate the desired trajectory $q$ from the demonstrations in joint space $Q=[Q_1 \cdots Q_n]^T$, $Q_i=[X_i^1 \cdots X_i^m]^T$. Here the trajectory $X_i^r$ corresponds to the $i$th joint and the $r$th demonstration; it is defined as $X_i^r=\{q_i(l)\}$, $l=1,\ldots,T_i^r$, where $T_i^r$ is the total number of joint-angle samples. There are $m$ trajectories for each joint. The number of demonstrations $m$ need not be large: a human cannot repeat the same motion many times, and near-identical repetitions only add redundant computation to the HMM. The data length $T_i^r$ differs from one demonstration to another because of different speeds and different starting/ending times.

Each joint has its own codebook, defined as $C_i$, $i=1,\ldots,n$. A codebook can be regarded as an $N_i$-dimensional vector, i.e., $C_i=[c_{i1} \cdots c_{iN_i}]^T$. The codebook dimension $N_i$ is selected from prior knowledge of the trajectory's geometry.

We use Lloyd's algorithm to train the codebook $C_i(t)$, where $t$ denotes the training iteration. The initial codebook is defined as $C_i(1)$. A poor initial condition may lead to a local minimum. A heuristic method [160] can be used to find a better $C_i(1)$; since the heuristic search takes considerable time, $C_i(1)$ can also be selected randomly from $X_i^r$.

The objective of using Lloyd's algorithm to create the codebook is to minimize the quantization error for a given data distribution. We need the nearest-neighbor and centroid conditions that are commonly used in Lloyd's algorithm. For a point $q_i(l_1)$ in the trajectory, we find the nearest codebook element by calculating

$$\min_{1\le j\le N_i}\left|q_i(l_1)-c_{ij}\right|,\quad l_1=1,\ldots,T,\ i=1,\ldots,n \tag{10.2}$$

If the nearest codebook element is $c_{ik}$, the corresponding region $R_{ij}$ ($i=1,\ldots,n$, $j=1,\ldots,N_i$) is defined as

$$R_{ik}=\{q_i(l_1)\}=[r_{ik1} \cdots r_{ikp_{ik}}],\quad k=1,\ldots,s_i^r \tag{10.3}$$

where $p_{ik}$ is the length of the region $R_{ik}$, and $s_i^r$ is the number of regions for joint $i$ and demonstration $r$.

Obviously, the center of the region $R_{ij}$ should be $c_{ij}$. The new center of $R_{ij}$ can be calculated as

$$c_{ij}=\frac{1}{p_{ij}}\sum_{l=1}^{p_{ij}} r_{ijl} \tag{10.4}$$

This is the centroid condition. The standard Lloyd's algorithm uses selected points to construct the centers [42]; we instead calculate the mean value of all points in the region $R_{ij}$. The advantage is that the centers can be updated online.

With (10.3) and (10.4), we can calculate $c_{ij}(t)$ recursively. We use the following quantization error to design a stop criterion. The average quantization error for $X_i^r=\{q_i(l)\}$, $l=1,\ldots,T$, is defined as

$$\varepsilon_{ij}(t)=\frac{1}{T}\sum_{l=1}^{T}\left|q_i(l)-c_{ij}(t)\right| \tag{10.5}$$

where $t$ is the recursive iteration count. We define the relative quantization error as

$$\Delta\varepsilon_{ij}(t)=\left|\frac{\varepsilon_{ij}(t+1)-\varepsilon_{ij}(t)}{\varepsilon_{ij}(t+1)}\right| \tag{10.6}$$

The codebook calculation is stopped when the relative error (10.6) is small enough:

$$\Delta\varepsilon_{ij}(t)\le\bar{\varepsilon} \tag{10.7}$$

where $\bar{\varepsilon}$ is a predefined upper bound.
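For concreteness, here is a minimal Python sketch of this recursive quantization for one joint trajectory. The function name `lloyd_codebook`, the random initialization, and the iteration cap are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def lloyd_codebook(q, N, eps_bar=1e-4, max_iter=100, rng=None):
    """Quantize a 1-D joint trajectory q with an N-element codebook,
    using the nearest-neighbor/centroid iteration and stop rule (10.7)."""
    rng = np.random.default_rng(rng)
    q = np.asarray(q, dtype=float)
    # Random initialization from the trajectory itself, as the text permits.
    c = np.sort(rng.choice(q, size=N, replace=False))
    prev_err = None
    for _ in range(max_iter):
        # Nearest-neighbor condition (10.2): assign each sample to a codeword.
        idx = np.argmin(np.abs(q[:, None] - c[None, :]), axis=1)
        # Centroid condition (10.4): each codeword becomes its region's mean.
        for j in range(N):
            members = q[idx == j]
            if members.size > 0:
                c[j] = members.mean()
        # Average quantization error (10.5).
        err = np.mean(np.abs(q - c[idx]))
        # Relative error (10.6) and stop criterion (10.7).
        if prev_err is not None and abs(err - prev_err) / max(err, 1e-12) <= eps_bar:
            break
        prev_err = err
    return c, idx
```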

Remark 10.1

We use Lloyd's algorithm to quantize the robot joint trajectories and obtain the codebook. Since we use the minimum distance as in (10.2), the codebook obtained from the above recursive method is optimal; see [42]. This means each joint angle $q_i(l)$ has a minimal distance to its codebook element. The quantization is along the Y-axis (joint-angle axis); we do not consider the time axis, so the similarity of trajectories with different durations or speeds can be measured directly. The advantage of this method is that we do not need dynamic time warping (DTW) as in [140].

In order to train the hidden Markov model, we need key-points and observation symbols, which are generated from the above codebook. The key-points are the start point, the end point, and the center point along the time axis of each codebook region. If the time index of $c_{ij}$ in (10.4) is defined as $t_{ij}$ ($i=1,\ldots,n$, $j=1,\ldots,N_i$), then the key-points are calculated as

$$k_{ij}=t_{ij}+\frac{t_{i(j+1)}-t_{ij}}{2} \tag{10.8}$$

For each demonstration, the key-point set is $K_i^r=[k_{i1} \cdots k_{i,v_i^r}]^T$ ($i=1,\ldots,n$, $r=1,\ldots,m$), where $v_i^r$ is the number of key-points in $K_i^r$. Since each key-point lies at the center of a region $R_{ij}$ and the starting and ending points of the trajectory are also key-points, $v_i^r=s_i^r+2$.

The observation symbols are the joint angles at the times of the key-points; when there are fewer of them than codebook entries, we use the codebook values. With $O_i^r=[o_{i1} \cdots o_{iN_i}]^T$ ($i=1,\ldots,n$, $r=1,\ldots,m$):

$$\begin{cases} o_{ij}(\cdot,q)=c_{i,j+1} & \text{if } c_{ij}\le q_i^r(k)\le c_{i,j+1} \\ o_{ij}(s,\cdot)=j+1 & \text{otherwise} \end{cases},\quad j=1,\ldots,N_i,\ k=1,\ldots,v_i^r \tag{10.9}$$

where $s$ is the symbol component of the observation and $q$ is its codebook value.

Lloyd's algorithm lets us make the dimensions of the key-points and the observation symbols the same as that of the codebook, so we avoid the manual tuning process of [42,140,152].
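The following hypothetical helper turns the codeword-index sequence from the sketch above into key-points and observation symbols. Splitting the index sequence into runs of a constant codeword is our assumption about how the regions (10.3) translate into code:

```python
import numpy as np

def keypoints_and_symbols(idx, c):
    """Extract key-points (10.8) and observation symbols from the
    codeword indices idx produced by lloyd_codebook."""
    idx = np.asarray(idx)
    # Regions (10.3): maximal runs of samples mapped to the same codeword.
    boundaries = np.flatnonzero(np.diff(idx)) + 1
    segments = np.split(np.arange(len(idx)), boundaries)
    # One key-point at the temporal center of each region, as in (10.8),
    # plus the starting and ending points of the trajectory.
    keypoints = [0] + [int((s[0] + s[-1]) // 2) for s in segments] + [len(idx) - 1]
    # The symbol of a key-point is its codeword index; its decoded
    # value is the corresponding codebook entry.
    symbols = [int(idx[k]) for k in keypoints]
    values = [float(c[s]) for s in symbols]
    return keypoints, symbols, values
```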

We use the following example to explain how to use Lloyd's algorithm to generate the codebook, key-points, and observation symbols.

Example 10.1

A robot draws a circle. The angle of Joint-1 is shown in Fig. 10.1. We first show how the codebook number $N$ affects the results. The two square waves correspond to $N=3$ and $N=4$. When $N=3$, there are 3 quantization levels along the Y-axis, which generate 5 regions in the trajectory: $s_1^1=5$, $R_{11},\ldots,R_{15}$. The key-points are at the center of each segment of the square waves. With the starting and ending points, the number of key-points in $K_1^1$ is 7, $v_1^1=s_1^1+2=7$. When $N=4$, the region number in (10.3) is $s_1^1=7$, and the key-point number is $v_1^1=9$. If there are 3 demonstrations, then $r=1,2,3$; see Fig. 10.2. The velocities are different, so usually dynamic time warping is needed to align them along the time axis. Here we use Lloyd's algorithm: the key-point numbers of the three demonstrations are the same, $N=3$, $s_1^1=s_1^2=s_1^3=5$. Although the key-points differ along the time axis, the observation symbols along the Y-axis are similar.

Figure 10.1 The codebook number N = 3 (broken line) and N = 4 (continuous line).
Figure 10.2 Different demonstrations with similar observation symbols.

The following algorithm summarizes the calculation of the codebook $C_i$, the key-points $K_i^r$, and the observation symbols $O_i^r$.

Algorithm 10.1

Calculation of codebook, key-points, and observation symbols.

Obtain demonstrations $Q=[Q_1 \cdots Q_n]^T$, $Q_i=[X_i^1 \cdots X_i^m]^T$
Set the codebook size $N_i$
For $i=1$ to $n$
  For $r=1$ to $m$
    $t=1$
    While $\Delta\varepsilon_{ij}(t)>\bar{\varepsilon}$
      For $j=1$ to $N_i$
        Calculate the region $R_{ij}(t)$ with (10.3)
        Calculate the center $c_{ij}(t)$ with (10.4)
        Calculate the relative quantization error with (10.6)
      End for j
      $t=t+1$
    End while
    Calculate the key-points $K_i^r$ with (10.8)
    Calculate the observation symbols $O_i^r$ with (10.9)
  End for r
End for i
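Under the same assumptions, Algorithm 10.1 amounts to chaining the two sketches above over all joints and demonstrations; a hypothetical driver for one joint might look like this (the sine demonstrations are synthetic stand-ins):

```python
import numpy as np

# Three synthetic demonstrations of one joint with different lengths/speeds.
demos = [np.sin(np.linspace(0, 2 * np.pi, T)) for T in (120, 100, 135)]
for r, x in enumerate(demos, start=1):
    c, idx = lloyd_codebook(x, N=4, eps_bar=1e-4, rng=0)
    K, O, vals = keypoints_and_symbols(idx, c)
    print(f"demonstration {r}: {len(K)} key-points, symbols {O}")
```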

10.2 Joint space trajectory generation with a modified hidden Markov model

After the codebook, key-points, and observation symbols are generated by Lloyd's algorithm, they are used to train a discrete hidden Markov model (HMM). This model can then generate a desired trajectory in joint space; see Fig. 10.3.

Figure 10.3 Robot trajectory generation using HMM: (A) joint space, (B) task space.

An HMM can be understood as a finite state machine with $N$ distinct states $S=\{S_1,S_2,S_3,\ldots,S_N\}$. At regular intervals the HMM may change its state; the times associated with the changes are denoted by $t=1,2,\ldots$, and the state at time $t$ is denoted by $h_t$. In process modeling, the transition probabilities depend only on the current state, not on the earlier history:

$$P(h_t=S_j\,|\,h_{t-1}=S_i,h_{t-2}=S_k,\ldots)=P(h_t=S_j\,|\,h_{t-1}=S_i) \tag{10.10}$$

where $t=1,2,\ldots$

Since the right-hand side of this equality is independent of $t$, each transition probability is defined as $a_{ij}$:

$$a_{ij}=P(h_t=S_j\,|\,h_{t-1}=S_i),\quad 1\le i,j\le N \tag{10.11}$$

Now we define the transition probability matrix as $A=\{a_{ij}\}$. If each state can lead to any other state in a single transition, then $a_{ij}>0$.

The number of distinct observation symbols is defined as $M$. The symbols, corresponding to the observable output of the system, are defined as

$$V=\{v_1,v_2,\ldots,v_M\} \tag{10.12}$$

The observation symbol distribution is defined as $B=\{b_j(k)\}$. The probability of the symbol $k$ in the state $j$ is

$$b_j(k)=P[v_k \text{ at } t\,|\,h_t=S_j],\quad 1\le j\le N,\ 1\le k\le M \tag{10.13}$$

A graphical representation of an HMM is given in Fig. 10.4, which corresponds to the HMM of the trajectory in Fig. 10.2. Here the codebook number is $N_1=4$ and the hidden state number is $v_1=7$. The configuration of this HMM is the well-known left–right topology.

Figure 10.4 A hidden Markov model of the trajectory in Fig. 10.2.

The probability distribution of the initial state is defined as $\pi=\{\pi_i\}$:

$$\pi_i=P[h_1=S_i],\quad 1\le i\le N \tag{10.14}$$

Given suitable values of $N$, $M$, $A$, $B$, and $\pi$, an HMM generates an observation sequence $O=[O_1,O_2,\ldots,O_T]$ according to the following procedure, where each observation $O_t$ is a symbol from the set $V$ in (10.12):

1. Select an initial state $h_1=S_i$ according to the initial distribution $\pi$. Set time $t=1$.
2. Select $O_t=v_k$ according to the observation probability distribution of the state $S_i$, i.e., $b_i(k)$.
3. Make a transition to the new state $h_{t+1}=S_j$ according to the transition distribution of $S_i$, i.e., $a_{ij}$.
4. Set $t=t+1$ and go to Step 2, until $t=T$.
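A minimal sketch of this generative procedure, assuming row-stochastic NumPy arrays A (N x N) and B (N x M) and a probability vector pi; the function name `sample_hmm` is ours:

```python
import numpy as np

def sample_hmm(A, B, pi, T, rng=None):
    """Generate T observations from lambda = (A, B, pi), Steps 1-4 above."""
    rng = np.random.default_rng(rng)
    N, M = B.shape
    states, obs = [], []
    h = rng.choice(N, p=pi)                  # Step 1: initial state from pi
    for _ in range(T):
        obs.append(rng.choice(M, p=B[h]))    # Step 2: emit symbol from row h of B
        states.append(h)
        h = rng.choice(N, p=A[h])            # Step 3: next state from row h of A
    return np.array(states), np.array(obs)   # Step 4 is the loop itself
```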

With the above procedure, an HMM can be represented as

$$\lambda=(A,B,\pi) \tag{10.15}$$

For the $p$th joint, the HMM $\lambda_p$ can be considered a generalization of the following mixture model [116]:

$$\lambda_p=(A_p,B_p,\pi_p) \tag{10.16}$$

Since, with respect to motion speed and starting point, the differences between demonstrations are much bigger in joint space than in task space, and the demonstrations cannot be many, the HMM (10.16) cannot generate a desired trajectory with good accuracy in joint space. We therefore modify (10.16) into the following form:

$$\lambda_p=(A_p,B_p,\pi_p,\hat{q}_p) \tag{10.17}$$

where $\hat{q}_p=[\hat{q}_{p1} \cdots \hat{q}_{pN_p}]^T$ is the output of the HMM (the scaled joint angles defined in (10.1)).

The output $\hat{q}_p$ of the HMM is also regarded as one more demonstration and is fed back to the HMM for training.

The number of states in $A_p$ relates to the number of key-points. Although the states of an HMM are hidden, the codebook and key-point generation algorithm gives them a physical significance on the trajectory. The state set of the $p$th joint is defined as $S_p=[s_{p1},\ldots,s_{pv_p}]^T$. The hidden state number $v_p$ is the maximum of the key-point numbers:

$$v_p=\max_{1\le r\le m}\{v_p^r\} \tag{10.18}$$

where $m$ is the number of demonstrations, $v_p$ is the hidden state number, and $N_p$ is the codebook number.

We use the key-point number $v_p^r$ to set the number of HMM states. Since $v_p^r$ depends on the codebook $C_p$, the $m$ trajectories have the same key-point number, defined as $v_p$; missing points are assigned the corresponding values of the other demonstrations. The observation symbols are $O_p=[o_{p1},\ldots,o_{pN_p}]^T$ ($p=1,\ldots,n$).

In order to train the model (10.17) with the data generated by Lloyd's algorithm in the last section, we use the following four steps:

1) Given an observation sequence $O=O_1O_2\cdots O_T$ and the model $\lambda_p=(A_p,B_p,\pi_p,\hat{q}_p)$, calculate the probability of the observation sequence, $P(O\,|\,\lambda_p)$.

The direct way to calculate the probability of $O_1O_2\cdots O_T$ with respect to $\lambda_p$ is to enumerate every possible state sequence of length $T$ (the number of observations). For a fixed state sequence $H=h_1h_2\cdots h_T$, the probability of the observation sequence $O=O_1O_2\cdots O_T$ is

$$P(O\,|\,H,\lambda_p)=\prod_{t=1}^{T}P(O_t\,|\,h_t,\lambda_p) \tag{10.19}$$

Under the independence assumption for the observations, this becomes

$$P(O\,|\,H,\lambda_p)=b_{h_1}(O_1)\,b_{h_2}(O_2)\cdots b_{h_T}(O_T) \tag{10.20}$$

The probability of $O$ is calculated by summing the joint probability of $O$ and $H$ over all possible state sequences:

$$P(O\,|\,\lambda_p)=\sum_{h_1h_2\cdots h_T}\pi_{h_1}b_{h_1}(O_1)\,a_{h_1h_2}b_{h_2}(O_2)\cdots a_{h_{T-1}h_T}b_{h_T}(O_T) \tag{10.21}$$

We use the following feedforward–feedback (forward–backward) process to calculate (10.21).

We define $\alpha_t(i)$ and $\beta_t(i)$ as

$$\alpha_t(i)=P(O_1,O_2,\ldots,O_t,h_t=S_i\,|\,\lambda_p),\qquad \beta_t(i)=P(O_{t+1},O_{t+2},\ldots,O_T\,|\,h_t=S_i,\lambda_p) \tag{10.22}$$

where $\alpha_t(i)$ is the probability of the partial observation sequence up to time $t<T$ together with the state $S_i$, and $\beta_t(i)$ is the probability of the partial observation sequence from $t+1$ given the state $S_i$.

The feedforward process is as follows: a) start from $\alpha_1(i)=\pi_i b_i(O_1)$; b) then

$$\alpha_{t+1}(j)=\left[\sum_{i=1}^{N}\alpha_t(i)\,a_{ij}\right]b_j(O_{t+1})$$

c) finally, $P(O\,|\,\lambda_p)=\sum_{i=1}^{N}\alpha_T(i)$.

The feedback process is as follows: a) start from $\beta_T(i)=1$; b) then

$$\beta_t(i)=\sum_{j=1}^{N}a_{ij}\,b_j(O_{t+1})\,\beta_{t+1}(j)$$

c) finally, $P(O\,|\,\lambda_p)=\sum_{i=1}^{N}\pi_i b_i(O_1)\,\beta_1(i)$.
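Both passes fit in a few lines of NumPy; O is assumed to be a sequence of integer symbol indices. A production version would rescale alpha and beta at every step to avoid floating-point underflow, a detail the text does not address:

```python
import numpy as np

def forward_backward(A, B, pi, O):
    """Compute alpha, beta of (10.22) and P(O | lambda) as in (10.21)."""
    T, N = len(O), len(pi)
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    # Feedforward pass: alpha_1(i) = pi_i * b_i(O_1), then the recursion.
    alpha[0] = pi * B[:, O[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, O[t]]
    # Feedback pass: beta_T(i) = 1, then the recursion.
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, O[t + 1]] * beta[t + 1])
    # Either termination gives P(O | lambda); we use the forward one.
    return alpha, beta, alpha[-1].sum()
```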

2) Find a state sequence $S_i=[s_{i1},\ldots,s_{iv_i}]^T$ such that the probability of the observation sequence $O=O_1,O_2,\ldots,O_T$ is optimal in the sense of

$$\delta_T(i)=\max_{h_1,h_2,\ldots,h_{T-1}}P(h_1h_2\cdots h_T=i,\,O_1O_2\cdots O_T\,|\,\lambda_p) \tag{10.23}$$

We need the highest probability along a single path. Considering the first $t+1$ observations,

$$\delta_{t+1}(j)=\left[\max_i \delta_t(i)\,a_{ij}\right]b_j(O_{t+1}) \tag{10.24}$$

In order to maximize this quantity at each $(t,j)$, we use the following recursive procedure: a) start from $\delta_1(i)=\pi_i b_i(O_1)$; b) then $\delta_t(j)=\max_i\left[\delta_{t-1}(i)\,a_{ij}\right]b_j(O_t)$; c) finally,

$$S_T^{\ast}=\arg\max_{1\le i\le N}\left[\delta_T(i)\right]$$

This finds the most probable state for each codebook element, i.e., we want to find

$$\max_{1\le l\le v_i}\{b_{ilj}\},\quad j=1,\ldots,N_i$$

The index of the maximum $b_{ilj}$ is $s_{cj}$. We then obtain the symbol set $\{s_{c1},\ldots,s_{cN_i}\}$.
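This recursion is the Viterbi algorithm. A sketch in log space (the log transform is our addition for numerical robustness, not something the text prescribes):

```python
import numpy as np

def viterbi(A, B, pi, O):
    """Most probable state path under (10.24), with backtracking."""
    T, N = len(O), len(pi)
    logA, logB = np.log(A + 1e-300), np.log(B + 1e-300)
    delta = np.log(pi + 1e-300) + logB[:, O[0]]   # delta_1(i)
    psi = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA       # delta_{t-1}(i) + log a_ij
        psi[t] = scores.argmax(axis=0)       # best predecessor of each state j
        delta = scores.max(axis=0) + logB[:, O[t]]
    # Backtrack from the final argmax of delta_T.
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]
```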

3) Use $P(O\,|\,\lambda_p)$ from Step 1 and the optimal sequence from Step 2 to train the HMM, i.e., train $\lambda_p$ with $S_p$ so as to maximize $P[O_p\,|\,\lambda_p]$.

The value of the state at the instant $t$ is denoted $r(t)$. The state transition matrix is $A_p=\{a_{plj}\}$:

$$a_{plj}=P[r(t+1)=s_{pj}\,|\,r(t)=s_{pl}],\quad 1\le l,j\le v_p \tag{10.25}$$

Eq. (10.25) represents the probability of being in the state $s_{pj}$ at time $t+1$ given the state $s_{pl}$ at time $t$. The initial elements $a_{ilj}$ of the state transition matrix $A_i(1)$ are selected as a uniform distribution over the forward transitions:

$$\sum_{j}a_{ilj}=1 \ (l\le j),\qquad a_{ilj}=0 \ \text{otherwise},\qquad i=1,\ldots,n$$

The observation probability matrix $B_p=\{b_{plj}\}$ is

$$b_{plj}=P[o_{pl} \text{ at } t\,|\,r(t)=s_{pj}],\quad 1\le l\le N_p,\ 1\le j\le v_p \tag{10.26}$$

The left–right structure above means the states cannot go back, and will jump when key-points are missing from the observed sequences. The initial condition for $b_{ilj}$ is

$$b_{ilj}=\begin{cases} P[o_{il} \text{ at } t\,|\,r(1)=s_{ij}]=1 & o_{il} \text{ is the symbol of the demonstration} \\ 0 & \text{otherwise} \end{cases}$$

The initial state distribution is $\pi_p(1)=[\pi_{p1} \cdots \pi_{pN_p}]=[1,0,\ldots,0]$,

$$\pi_{pj}(1)=P[r(1)=s_{pj}],\quad 1\le j\le N_p \tag{10.27}$$

We define

$$\xi_t(i,j)=\frac{\alpha_t(i)\,a_{ij}\,b_j(O_{t+1})\,\beta_{t+1}(j)}{P(O\,|\,\lambda)}=\frac{\alpha_t(i)\,a_{ij}\,b_j(O_{t+1})\,\beta_{t+1}(j)}{\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_t(i)\,a_{ij}\,b_j(O_{t+1})\,\beta_{t+1}(j)},\qquad \gamma_t(i)=\sum_{j=1}^{N}\xi_t(i,j)$$

Here $\xi_t(i,j)$ has the character of a probability measure, and $\gamma_t(i)$ is the probability of being in the state $S_i$ at time $t$ given the observation sequence $O$.

By the Baum–Welch algorithm [9], the transition probabilities $P[r(t+1)=s_{ij}\,|\,r(t)=s_{il}]$ and the emission distribution $P[o_{il} \text{ at } t\,|\,r(t)=s_{ij}]$ are re-estimated as

$$\bar{a}_{ij}=\frac{\sum_{t=1}^{T-1}\xi_t(i,j)}{\sum_{t=1}^{T-1}\gamma_t(i)},\qquad \bar{b}_j(k)=\frac{\sum_{t:\,O_t=v_k}\gamma_t(j)}{\sum_{t=1}^{T}\gamma_t(j)}$$
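One such re-estimation step, built on the forward_backward sketch above; this hypothetical `baum_welch_step` handles a single observation sequence, whereas the toolbox used in Section 10.3 also supports multiple sequences:

```python
import numpy as np

def baum_welch_step(A, B, pi, O):
    """One Baum-Welch update of (A, B, pi) using xi and gamma as defined above."""
    T, N = len(O), len(pi)
    O = np.asarray(O)
    alpha, beta, _ = forward_backward(A, B, pi, O)
    # xi_t(i, j): normalizing each slice by its sum divides by P(O | lambda).
    xi = np.zeros((T - 1, N, N))
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * (B[:, O[t + 1]] * beta[t + 1])[None, :]
        xi[t] /= xi[t].sum()
    # gamma_t(i), normalized the same way.
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    # Re-estimation formulas for the transition and emission probabilities.
    A_new = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    B_new = np.zeros_like(B)
    for k in range(B.shape[1]):
        B_new[:, k] = gamma[O == k].sum(axis=0)
    B_new /= gamma.sum(axis=0)[:, None]
    return A_new, B_new, gamma[0]
```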

4) Decode the symbols into values $\hat{q}_p$.

The output $\hat{q}_p$ can be regarded as the decoded values of the observation symbols from the codebook:

$$\hat{q}_p=[\hat{q}_{i1} \cdots \hat{q}_{iN_i}]=[o_{i,s_{c1}} \cdots o_{i,s_{cN_i}}]$$

The output $\hat{q}$ of the HMM is a discrete sequence. In order to generate a smooth trajectory, we use third-order (cubic) spline interpolation:

$$s_t(x)=a_t(x-\hat{q}_t)^3+b_t(x-\hat{q}_t)^2+c_t(x-\hat{q}_t)+d_t \tag{10.28}$$

where $t=1,2,\ldots,T_i$ indexes the piecewise functions. Since the time indices of the output are the key-points $K_i^r$ in (10.8), the total time of the trajectory $\hat{q}$ is

$$T_i=\frac{1}{v_i}\sum_{j=1}^{v_i}k_{ij},\quad i=1,\ldots,n$$

When the discrete points $x_t$ are the outputs of the HMM, the spline function gives a smooth trajectory $\hat{q}(t)$. After time scaling, the desired trajectory is

$$\hat{q}(\alpha t) \tag{10.29}$$

where $\alpha$ is the time-scale factor in joint space. Finally, the complete scheme of our algorithm is shown in Fig. 10.5.
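A sketch of the smoothing and time-scaling step using SciPy's cubic spline; the key-point times and decoded values below are hypothetical placeholders:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def smooth_trajectory(keypoint_times, q_hat, alpha=1.0, samples=500):
    """Cubic-spline interpolation (10.28) of the decoded HMM output,
    followed by the time scaling of (10.29)."""
    spline = CubicSpline(keypoint_times, q_hat)
    t = np.linspace(keypoint_times[0], keypoint_times[-1], samples)
    # alpha < 1 compresses the time axis, i.e., the robot moves faster.
    return alpha * t, spline(t)

# Hypothetical decoded output at key-point times; alpha = 0.5 is twice as fast.
times = np.array([0.0, 0.4, 1.1, 1.9, 2.5])
q_hat = np.array([0.0, 0.3, 0.9, 0.5, 0.1])
t_fast, q_fast = smooth_trajectory(times, q_hat, alpha=0.5)
```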

Figure 10.5 Trajectory generation via modified HMM and Lloyd's algorithm.

10.3 Experiments of learning trajectory

We evaluate the effectiveness of the proposed method with two examples: a two-link revolute-joint arm (planar elbow manipulator) and our 4-DoF exoskeleton robot (CINVESRobot-1). The experiments are implemented in Matlab, without code optimization, on a PC with an Intel Core i3 3.30 GHz processor. We use Kevin Murphy's HMM Toolbox [98] (Baum–Welch algorithm [9]) to train the HMM.

We compare our method (LHMM) with two other approaches: HMM in task space with the inverse kinematics calculation (THMM), and HMM in joint space with dynamic time warping (JHMM).

10.3.1 Two-link planar elbow manipulator

We use this example to show that our method can learn space and time differences in the demonstrations, and to show how the time difference affects the training results. The time differences between the demonstrations are about 8% (0.1 s).

The definitions of the two-link robot are shown in Fig. 10.6. By direct calculation, the forward and inverse kinematics are

$$x=l_1\cos q_1+l_2\cos(q_1+q_2),\qquad y=l_1\sin q_1+l_2\sin(q_1+q_2) \tag{10.30}$$

$$q_2=\tan^{-1}\frac{\pm\sqrt{1-D^2}}{D},\quad D=\frac{x^2+y^2-l_1^2-l_2^2}{2l_1l_2},\qquad q_1=\tan^{-1}\left(\frac{y}{x}\right)-\tan^{-1}\left(\frac{l_2\sin q_2}{l_1+l_2\cos q_2}\right) \tag{10.31}$$

Figure 10.6 Two-link planar elbow manipulator.
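A direct transcription of (10.30) and (10.31) for scalar inputs; the unit link lengths are illustrative defaults, and the sign of the square root selects the elbow branch:

```python
import numpy as np

def forward_kin(q1, q2, l1=1.0, l2=1.0):
    """Forward kinematics (10.30) of the planar elbow manipulator."""
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return x, y

def inverse_kin(x, y, l1=1.0, l2=1.0, elbow_up=True):
    """Inverse kinematics (10.31); assumes (x, y) is reachable, |D| <= 1."""
    D = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    s = np.sqrt(1.0 - D**2)
    q2 = np.arctan2(s if elbow_up else -s, D)
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return q1, q2

# Round trip: joint angles -> Cartesian point -> joint angles.
x, y = forward_kin(0.3, 0.8)
print(inverse_kin(x, y))  # approximately (0.3, 0.8)
```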

1) We first show how our algorithm works with time differences. We draw three similar broken lines in task space with different velocities; see Fig. 10.7. These lines are almost the same in task space, but in joint space they differ; see Fig. 10.8.

Figure 10.7 Demonstrations (dotted-line) and generated trajectory (solid-line) in task space with LHMM.
Figure 10.8 Demonstrations (dotted-line) and generated trajectory (solid-line) in joint space with LHMM.

We use $N_1=N_2=100$ as the codebook numbers for the two joints; these values are chosen heuristically to balance computational complexity and modeling accuracy. The key-point numbers of the joints are obtained from Lloyd's algorithm: for Joint-1, $v_1^1=62$, $v_1^2=63$, $v_1^3=61$; for Joint-2, $v_2^1=76$, $v_2^2=77$, $v_2^3=75$. By (10.18), the state numbers of the HMM for the two joints are $v_1=63$ and $v_2=77$. After the HMM is trained, it generates desired trajectories $q_1$ and $q_2$ at different velocities via (10.29). If we select $\alpha=1$, the solid lines in Fig. 10.8 are the desired joint angles. Suppose we want the robot to move two times faster than its demonstrations: we select $\alpha=0.5$, and the result is shown by the solid lines in Fig. 10.9. It is interesting to see that the corresponding task-space trajectory does not change; it is the solid line in Fig. 10.7.

Figure 10.9 Two times faster demonstrations (dotted-line) and generated trajectory (solid-line) in joint space with LHMM.

2) The second test shows how our algorithm works for space differences. The codebook numbers are again $N_1=N_2=100$. The key-point numbers are $v_1^1=60$, $v_1^2=51$, $v_1^3=55$, $v_2^1=64$, $v_2^2=68$, $v_2^3=71$. The generated trajectory in task space is the solid line in Fig. 10.10; the corresponding joint-space trajectories are shown in Fig. 10.11.

Figure 10.10 Space different demonstrations (dotted-line) and generated trajectory (solid-line) in task space with LHMM and forward kinematics.
Figure 10.11 Space different demonstrations (dotted-line) and generated trajectory (solid-line) in joint space with LHMM.

3) THMM works very well for the speed differences (Fig. 10.8), because the demonstrations in task space are similar and the inverse kinematics (10.31) are known. However, it does not work well for the space differences (Fig. 10.10), because the HMM cannot cope with so few demonstrations in task space. Our method, on the other hand, works in joint space with Lloyd's algorithm and is not affected by these demonstrations; see Fig. 10.12.

Figure 10.12 Space different demonstrations (dotted-line) and generated trajectory (solid-line) in task space with THMM.

JHMM can work for both speed and space differences. Fig. 10.13 shows its results for the speed differences. The accuracy of JHMM is lower than that of our LHMM (compare with Fig. 10.10 and Fig. 10.11); the error comes from the dynamic time warping.

Figure 10.13 Demonstrations (dotted-line) and generated trajectory (solid-line) in task space with JHMM.

The modified HMM (10.17) uses the auxiliary output, so it can learn the joint-space trajectories from a small number of demonstrations (three training samples), while the original HMM (10.16) cannot generate the joint-space trajectories with three demonstrations.

10.3.2 4-DoF upper limb exoskeleton

The computer-controlled platform for our upper limb exoskeleton is CINVESRobot-1. We first draw a 3-D “O” in the task space three times; see Fig. 10.14. At the same time, the joint angles are saved in the computer as the training trajectories; see Fig. 10.15. Then we use our modified HMM to train a model. Since the drawing speeds are different, the data sizes of the three demonstrations are 629, 576, and 615. We select the codebook numbers $N_1=N_2=N_3=14$. The key-point numbers are $v_{11}=29$, $v_{12}=32$, $v_{13}=30$. (See also Fig. 10.16.) After the HMM is trained, we use it to generate desired trajectories in the joint space; see the solid lines in Fig. 10.17. These trajectories are sent to the lower-level controller, where the three motors use PID control to follow them. As a result, the robot draws another “O” in the task space; see the solid line in Fig. 10.18.

Figure 10.14 Draw “O” in task space.
Figure 10.15 Training demonstrations for q1 with LHMM.
Figure 10.16 Training demonstrations for q2 with LHMM.
Figure 10.17 Generated desired trajectories (solid lines) in joint space with LHMM.
Figure 10.18 The robot draws “O” in task space with LHMM.

An advantage of our joint-space method is that the time scale of the desired trajectory can be changed, so that the “O” in task space is drawn faster than in the demonstrations. Such faster-than-demonstration (“superhuman”) performance is realized easily by our modified HMM. Fig. 10.19 shows a trajectory generated by our method that is twice as fast as the training demonstrations; here $\alpha=0.5$ in (10.29).

Figure 10.19 CINVESRobot-1 draws “O” in task space with JHMM.

For this example, we do not compare our method LHMM with THMM, because the task-space method THMM does not work with only three demonstrations. JHMM can work in joint space for different speeds; Fig. 10.19 shows its result in task space. It is almost an “O,” but the accuracy is much lower than that of our LHMM, because DTW approximates poorly in higher dimensions (3-D).

10.4 Conclusions

The advantage of programming a robot by demonstration in joint space is that the inverse kinematics are avoided. The disadvantage is that the demonstrations in joint space are strongly time-dependent. We use Lloyd's algorithm and a modified HMM to solve these problems in joint space.

We use Lloyd's algorithm to encode and quantize the input vectors. This approach improves accuracy compared with the dynamic time warping method. The traditional HMM is modified so that it can generate trajectories in joint space. Simulation and experimental results are presented, showing that the proposed method is effective for generating robot trajectories. Although there are few works in joint space, we believe this direct method can be applied to more robots in the future.

Bibliography

[9] L.E. Baum, T. Petrie, G. Soules, N. Weiss, A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains, Ann. Math. Stat. 1970;41(1):164–171.

[42] G.A. Fink, Markov Models for Pattern Recognition: From Theory to Applications. Berlin: Springer; 2008.

[98] K. Murphy, Hidden Markov Model (HMM) Toolbox for MATLAB, 1998.

[116] L. Rabiner, A tutorial on hidden Markov models and selected applications in speech recognition, Proc. IEEE 1989;77(2):257–286.

[140] A. Vakanski, I. Mantegh, A. Irish, F. Janabi-Sharifi, Trajectory learning for robot programming by demonstration using hidden Markov model and dynamic time warping, IEEE Trans. Syst. Man Cybern., Part B, Cybern. 2012;42(4):1039–1052.

[152] J. Yang, Y. Xu, C.S. Chen, Human action learning via hidden Markov model, IEEE Trans. Syst. Man Cybern., Part A, Syst. Hum. 1997;27(1):34–44.

[160] Amelia Zafra, Mykola Pechenizkiy, Sebastián Ventura, HyDR-MI: a hybrid algorithm to reduce dimensionality in multiple instance learning, Inf. Sci. 2013;222:282–301.
