© Navin Sabharwal, Shakuntala Gupta Edward 2020
N. Sabharwal, S. G. Edward, Hands On Google Cloud SQL and Cloud Spanner, https://doi.org/10.1007/978-1-4842-5537-7_2

2. Cloud SQL

Navin Sabharwal1  and Shakuntala Gupta Edward2
(1)
New Delhi, India
(2)
Ghaziabad, India
 
Cloud SQL is Google’s relational database offering: a cloud-hosted MySQL or PostgreSQL database.

Cloud SQL is a fully managed database service that makes it easy to set up, manage, maintain, and administer relational databases on the Google Cloud Platform.

—Google

Cloud SQL enables you to run the same SQL queries you are used to but within Google’s managed cloud. This offering is extremely helpful if you are migrating an existing application to a cloud database without much reengineering.

This chapter starts with Cloud SQL, and you will learn how to provision a Cloud SQL instance. Before that, however, the chapter discusses what a relational database is.

Relational Databases

The following section introduces relational database concepts and is most relevant to readers who are new to RDBMSs.

Relational databases have been a common choice of database for personal data, financial records, logistical information, manufacturing records, and other applications since the 1980s. These systems follow the relational model proposed by IBM’s E. F. Codd in 1970.

Relational databases refer to databases that store data in a structured format in the form of tables using rows and columns. The values within each table are related to each other. See Table 2-1.
Table 2-1. Students

Roll Number | Name  | Class | Section | Team
23          | Ram   | IV    | C       | Vivekananda
25          | Shyam | V     | A       | Thomas

As you can see in Table 2-1, each row refers to a student and the columns represent the attributes of that student.

In relational databases, tables and their schemas are defined before you start capturing the data. As an example, see Table 2-2, which specifies the attribute names, data type, and size.
Table 2-2. Student Table Schema

Attribute   | Data Type and Size
Roll Number | Number
Name        | STRING [50]
Class       | STRING [5]
Section     | STRING [1]
Team        | STRING [50]
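As a sketch of how such a schema is declared in practice, the snippet below creates the Students table and loads the rows from Table 2-1. It uses Python’s built-in sqlite3 module, with SQLite standing in for MySQL or PostgreSQL, so the STRING types map to TEXT; the lowercase table and column names are illustrative choices, not mandated by the text.

```python
import sqlite3

# In-memory SQLite database used as a stand-in for any relational engine;
# columns follow the schema in Table 2-2.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE students (
        roll_number INTEGER,
        name        TEXT,   -- STRING [50] in the schema
        class       TEXT,   -- STRING [5]
        section     TEXT,   -- STRING [1]
        team        TEXT    -- STRING [50]
    )
""")

# Load the two rows from Table 2-1
conn.executemany(
    "INSERT INTO students VALUES (?, ?, ?, ?, ?)",
    [(23, "Ram", "IV", "C", "Vivekananda"),
     (25, "Shyam", "V", "A", "Thomas")],
)
rows = conn.execute(
    "SELECT roll_number, name FROM students ORDER BY roll_number").fetchall()
```

Because the schema is declared first, the engine rejects any row that does not fit the declared columns.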

Tables may also be related to each other. In order to understand relationships between tables, consider the following example.

In an organization there are various departments and each department has employees. Tables 2-3 and 2-4 show the schema definition for these tables.
Table 2-3. Department Table Schema

Attribute       | Data Type and Size | Key
Department ID   | STRING [5]         | PRIMARY KEY
Department Name | STRING [50]        |
Table 2-4. Employee Table Schema

Attribute     | Data Type and Size | Key
Employee ID   | STRING [10]        | PRIMARY KEY
Employee Name | STRING [50]        |
Department ID | STRING [5]         | FOREIGN KEY

In addition to the data type and size, the attributes also include keys: the primary key and the foreign key.

Primary Key

Whenever an employee joins the organization, an ID is assigned that uniquely identifies the employee. You cannot have two employees with the same ID, and you can’t have an employee without an ID. Similarly, in a relational table there are columns whose values uniquely identify each row. These columns are called primary keys—for example, the Employee ID in the Employee table and the Department ID in the Department table.

Foreign Key

When an employee joins the organization, a department is also assigned to the employee. When storing that information in the form of a table, you store the department ID along with the employee. Since the Department ID is the primary key of the Department table, it becomes the foreign key in the Employee table.

A column in a table that serves as the primary key of another table is called the foreign key. The foreign key links the two tables.

In the previous example, the Department ID key links the two tables, Employee and Department. The data in the individual tables looks like Tables 2-5 and 2-6.
Table 2-5. Department Table

Department ID | Department Name
D01           | IT
D02           | Admin
D03           | HR
D04           | EHS
D05           | Finance

Table 2-6. Employee Table

Employee ID | Employee Name | Department ID
E1          | Mike          | D01
E2          | Michelle      | D01
E3          | Dave          | D02
E4          | Kate          | D03
E5          | Chintamani    | D04
E6          | Natalie       | D05
E7          | Steffi        | D02

If you have to determine the name of the department that employee E1 belongs to, you simply use the Department ID D01 mapped to E1 to retrieve the department, which is IT. Tables in relational databases are linked, or related, using primary keys and foreign keys.
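The department lookup just described is what a SQL JOIN does. The sketch below, again using Python’s sqlite3 module as a stand-in engine, declares the two tables with their keys and follows the foreign key from employee E1 to the department name (the PRAGMA line is a SQLite-specific detail: it must be set for SQLite to enforce foreign keys):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this to enforce FKs

# Schemas from Tables 2-3 and 2-4
conn.execute("""
    CREATE TABLE department (
        department_id   TEXT PRIMARY KEY,
        department_name TEXT
    )
""")
conn.execute("""
    CREATE TABLE employee (
        employee_id   TEXT PRIMARY KEY,
        employee_name TEXT,
        department_id TEXT REFERENCES department(department_id)
    )
""")

# A subset of the data from Tables 2-5 and 2-6
conn.executemany("INSERT INTO department VALUES (?, ?)",
                 [("D01", "IT"), ("D02", "Admin"), ("D03", "HR")])
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [("E1", "Mike", "D01"), ("E3", "Dave", "D02")])

# Follow the foreign key from employee E1 to its department name
dept = conn.execute("""
    SELECT d.department_name
    FROM employee e
    JOIN department d ON e.department_id = d.department_id
    WHERE e.employee_id = 'E1'
""").fetchone()[0]
```

The JOIN condition is exactly the primary key/foreign key relationship between the two tables.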

Constraints

In addition to the data type, size, and key specifications, relational databases also enable you to define constraints in the schema. For example, a UNIQUE constraint ensures uniqueness within a column’s data, and a NOT NULL constraint specifies that the column cannot hold a NULL (empty) value. Together with the primary and foreign keys, these constraints maintain the integrity of the data, including referential integrity across tables.

If a department is to be deleted, the foreign-primary key will help ensure that all related employees are either deleted or moved to another department.

If you alter the employee schema and add a new column, called Nickname, with a UNIQUE constraint and then modify the Employee Name and Department ID to have a NOT NULL constraint, you’ll get Table 2-7.
Table 2-7. Modified Employee Schema

Attribute     | Data Type and Size | Constraints
Employee ID   | STRING [10]        | PRIMARY KEY
Employee Name | STRING [50]        | NOT NULL
Nickname      | STRING [10]        | UNIQUE
Department ID | STRING [5]         | FOREIGN KEY, NOT NULL

Note

Department ID is a foreign key and has the NOT NULL constraint associated with it.

When an employee is created, the UNIQUE constraint on Nickname ensures that the same nickname is not used twice, and the NOT NULL constraints ensure that no employee is created without a Name or a Department. Together these keep the table’s data consistent.
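A minimal sketch of these constraints in action, using sqlite3 as the stand-in engine; the engine rejects a violating INSERT by raising an integrity error:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Schema from Table 2-7 (foreign key omitted for brevity)
conn.execute("""
    CREATE TABLE employee (
        employee_id   TEXT PRIMARY KEY,
        employee_name TEXT NOT NULL,
        nickname      TEXT UNIQUE,
        department_id TEXT NOT NULL
    )
""")
conn.execute("INSERT INTO employee VALUES ('E1', 'Mike', 'Mikey', 'D01')")

# A duplicate nickname violates the UNIQUE constraint ...
try:
    conn.execute("INSERT INTO employee VALUES ('E2', 'Michelle', 'Mikey', 'D01')")
    unique_violated = False
except sqlite3.IntegrityError:
    unique_violated = True

# ... and a missing name violates the NOT NULL constraint
try:
    conn.execute("INSERT INTO employee VALUES ('E3', NULL, 'Dave', 'D02')")
    not_null_violated = False
except sqlite3.IntegrityError:
    not_null_violated = True
```

In both cases the offending row is never stored, so the table’s data stays consistent with the declared rules.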

Operations

Now that you have an understanding of the schema, this section looks at the types of operations that can be performed on the data. The operations are grouped into DDL, DML, and read requests.

DDL

DDL stands for data definition language and it includes operations dealing with schema creation or modifications such as CREATE TABLE, DROP TABLE, and ALTER TABLE, to name a few. See Tables 2-8 through 2-10.
Table 2-8. CREATE TABLE: Employee Table

Employee ID   | STRING [10]
Employee Name | STRING [50]

Table 2-9. ALTER TABLE: Employee Table, Data Size Modified for Column Name

Employee ID   | STRING [10]
Employee Name | STRING [100]

Table 2-10. DROP TABLE: Employee Table Removed

Employee ID   | STRING [10]
Employee Name | STRING [100]
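The DDL behind Tables 2-8 through 2-10 can be sketched as follows. One dialect caveat: SQLite, used here as the stand-in engine, cannot widen an existing column, so this sketch adds a column instead; in MySQL the Table 2-9 change would be written as ALTER TABLE employee MODIFY employee_name VARCHAR(100).

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# CREATE TABLE (Table 2-8)
conn.execute("CREATE TABLE employee (employee_id TEXT, employee_name TEXT)")

# ALTER TABLE: SQLite only supports additive changes, so we add a column
# rather than widening employee_name as in Table 2-9
conn.execute("ALTER TABLE employee ADD COLUMN department_id TEXT")

tables_before = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]

# DROP TABLE (Table 2-10) removes the table definition and all its data
conn.execute("DROP TABLE employee")
tables_after = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
```

Note that DDL operates on the schema itself: after DROP TABLE, the table no longer appears in the database catalog at all.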

DML

DML stands for data manipulation language and this includes operations that are used for manipulating data such as inserts, deletes, and updates. See Tables 2-11 through 2-14.
Table 2-11. Employee Table Data

Employee ID | Employee Name
E1          | Mike
E2          | Steffi

Table 2-12. Insert Data: Inserted New Rows from E3–E5

Employee ID | Employee Name
E1          | Mike
E2          | Steffi
E3          | Michelle
E4          | Steve
E5          | Sandy

Table 2-13. Update Data: Updated Names for Employee IDs E2 and E5

Employee ID | Employee Name
E1          | Mike
E2          | Steffi Tom
E3          | Michelle
E4          | Steve
E5          | Sandy Riddle

Table 2-14. Delete Data: Deleted Employee ID E2

Employee ID | Employee Name
E1          | Mike
E3          | Michelle
E4          | Steve
E5          | Sandy Riddle
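The sequence from Tables 2-11 through 2-14 maps directly to INSERT, UPDATE, and DELETE statements; a runnable sketch with the sqlite3 stand-in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE employee (employee_id TEXT PRIMARY KEY, employee_name TEXT)")

# INSERT (Tables 2-11 and 2-12)
conn.executemany("INSERT INTO employee VALUES (?, ?)",
                 [("E1", "Mike"), ("E2", "Steffi"), ("E3", "Michelle"),
                  ("E4", "Steve"), ("E5", "Sandy")])

# UPDATE (Table 2-13): change the names for E2 and E5
conn.execute(
    "UPDATE employee SET employee_name = 'Steffi Tom' WHERE employee_id = 'E2'")
conn.execute(
    "UPDATE employee SET employee_name = 'Sandy Riddle' WHERE employee_id = 'E5'")

# DELETE (Table 2-14): remove employee E2
conn.execute("DELETE FROM employee WHERE employee_id = 'E2'")

remaining = conn.execute(
    "SELECT employee_id, employee_name FROM employee ORDER BY employee_id"
).fetchall()
```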

Read Requests

Read requests are operations that are used for retrieving or querying the data stored within the table. See Table 2-15.
Table 2-15. Query Data: All Employees with a Name Starting with M

Employee ID | Employee Name
E1          | Mike
E3          | Michelle
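The query in Table 2-15 is a SELECT with a LIKE filter; for example, with the sqlite3 stand-in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (employee_id TEXT, employee_name TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?)",
                 [("E1", "Mike"), ("E2", "Steffi"), ("E3", "Michelle")])

# All employees whose name starts with M (Table 2-15)
matches = conn.execute(
    "SELECT employee_id FROM employee "
    "WHERE employee_name LIKE 'M%' ORDER BY employee_id"
).fetchall()
```

Read requests never modify the table; they only retrieve rows matching the given predicate.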

Transactions

SQL (Structured Query Language) is the query language used for performing all these operations on the relational databases.

In general in relational databases, the operations of accessing and modifying data are grouped together into a logical unit of work. This logical unit of work is called a transaction.

You can better grasp what a transaction is with an example. Say you have a banking system. Suppose Account A has a balance of $400 and Account B has $700. Account A is transferring $100 to Account B. This is really two updates to the underlying database: First the amount in Account A needs to be reduced by $100 and then the amount in Account B needs to be increased by $100.

In relational databases, all the work is carried out in a transaction. Let’s refer to it as T1.
T1
BEGIN
Read Account A balance from the database: $400
Read Account B balance from the database: $700
Subtract $100 from Account A, resulting in $300
Write $300 to Account A in the database
Add $100 to Account B, resulting in $800
Write $800 to Account B in the database
COMMIT
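Using the sqlite3 stand-in again, transaction T1 looks like the following; both updates become visible to other readers only when COMMIT succeeds:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 400), ("B", 700)])
conn.commit()

# Transaction T1: move $100 from A to B; the two updates are one unit of work
conn.execute("UPDATE accounts SET balance = balance - 100 WHERE name = 'A'")
conn.execute("UPDATE accounts SET balance = balance + 100 WHERE name = 'B'")
conn.commit()  # COMMIT makes both writes permanent together

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
```

With Python’s sqlite3, the first UPDATE implicitly begins the transaction and commit() ends it, mirroring the BEGIN/COMMIT bracket above.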

Note

BEGIN indicates the start of the operation and COMMIT indicates that the operation is complete.

Transactions in relational databases provide the ACID properties: the four guarantees any relational database makes to ensure that database modifications are saved in a consistent, safe, and robust manner. ACID stands for Atomicity, Consistency, Isolation, and Durability.

Atomicity is the property that ensures that either all the operations of a transaction are reflected in the database or none are.

In the previous example, say the first operation, which reduces Account A by $100, completes, but the second operation fails. Account A’s balance would then be $300 while B’s remains unchanged ($700 instead of $800). This is unacceptable, because it means a loss of $100. Instead, either the transaction should fail without applying any operations or it should apply both. An RDBMS ensures this with the atomicity property.
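Atomicity can be sketched by simulating a failure between the two updates and rolling back. The transfer helper below is a hypothetical illustration, not a real API; the point is that rollback() undoes the partial debit of Account A:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 400), ("B", 700)])
conn.commit()

def transfer(conn, fail_midway):
    """Move $100 from A to B, optionally crashing between the two updates."""
    conn.execute("UPDATE accounts SET balance = balance - 100 WHERE name = 'A'")
    if fail_midway:
        raise RuntimeError("simulated crash between the two updates")
    conn.execute("UPDATE accounts SET balance = balance + 100 WHERE name = 'B'")

try:
    transfer(conn, fail_midway=True)
    conn.commit()
except RuntimeError:
    conn.rollback()  # atomicity: the partial debit of A is undone

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
```

After the rollback, both balances are exactly as they were before the transaction started; no money is lost.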

The consistency property preserves data correctness. Earlier you read about constraints, which are rules defined to ensure data consistency. Consistency ensures that none of these rules is violated when a transaction completes; any transaction that would violate a rule is not allowed to complete. At all times, anyone reading data from the database sees consistent data, i.e., data that conforms to all defined rules.

Consistency also ensures that readers don’t see dirty, uncommitted changes. In the previous example, at any time prior to the transaction COMMIT, a read query for Account A’s or B’s balance will show the amounts as [$400, $700] and not [$300, $800].

The isolation property ensures that if multiple transactions are concurrently performed on the database, individual updates of the transactions will never be interleaved in such a way as to result in bad data. The database must guarantee that updates happen in such a way that it appears as if all the transactions happened one after the other.

To understand this better, say you have one more transaction (T2) in which Account A is receiving $100 from another account, which is Account C. Account C has $500 in it. Let’s further assume that the transactions run concurrently, leading to interleaving of operations between the two transactions, as follows:
T1:
       BEGIN
       Read Account A balance from the database: $400
       Read Account B balance from the database: $700
T2:    BEGIN
       Read Account A balance from the database: $400
       Read Account C balance from the database: $500
       Subtract $100 from Account C, resulting in $400
       Write $400 to Account C in the database
       Add $100 to Account A, resulting in $500
       Write $500 to Account A in the database
       COMMIT
T1:
       Subtract $100 from Account A, resulting in $300
       Write $300 to Account A in the database
       Add $100 to Account B, resulting in $800
       Write $800 to Account B in the database
       COMMIT

Note that the final balance of Account A after completing both transactions will be $300, which means the $100 addition to Account A has been lost.

RDBMSs use the isolation property to guarantee that this never happens. If the transactions ran in isolation, the second transaction would read the correct balance of $300 once the first transaction completed.

In effect, isolation means that for every pair of concurrent transactions on the same data, the outcome is as if one transaction started only after the other finished.
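The lost update above can be reproduced with a small plain-Python simulation. No database is involved, which is exactly the point: each "transaction" reads a balance into a local variable, computes, and writes back, and without the serialization an RDBMS’s isolation provides, T2’s deposit is lost:

```python
# Balances before either transaction runs
accounts = {"A": 400, "B": 700, "C": 500}

# T1 reads A, then is interrupted before writing
t1_a = accounts["A"]          # T1 sees $400

# T2 runs to completion: transfer $100 from C to A
t2_a = accounts["A"]          # T2 also sees $400
accounts["C"] -= 100          # C becomes $400
accounts["A"] = t2_a + 100    # A becomes $500; T2 commits

# T1 resumes with its stale read of A
accounts["A"] = t1_a - 100    # A overwritten to $300: T2's deposit is lost
accounts["B"] += 100          # B becomes $800

lost_update = accounts["A"] == 300  # $100 from T2 has vanished
```

A serial execution would end with A at $400 ($400 + $100 from T2, then −$100 from T1); the interleaved run ends at $300, demonstrating why isolation is required.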

Durability means that the changes made by a successful transaction are permanent; once committed, they will not disappear, whatever happens.

Whenever an UPDATE statement is issued, the update first happens in memory and then the data on the disk is updated. Let’s suppose the data in memory was updated but prior to the update on the disk, a power failure happens. Since memory is volatile by nature this may lead to losing the update. However, the durability guarantee of the RDBMS ensures safety against such scenarios. The recovery management component manages the transactions in such a way that the transactions remain durable and no data loss happens in any situation.

These ACID properties make relational databases best suited for transactional workloads, and they are the key feature that differentiates relational database systems from modern non-relational databases.

In conclusion, relational databases are best suited for structured data with known schemas and relations and that needs to be consistent. Now that you have learned about relational databases, let’s begin with Cloud SQL.

Getting Started with Cloud SQL

As described earlier, Cloud SQL is Google’s fully managed SQL database service. Fully managed means that the mundane but necessary and often time-consuming tasks such as applying patches and updates, managing backups, and configuring replication are all handled by Google, so users can simply focus on building the application.

A Cloud SQL instance supports two open source SQL implementations (database engines): MySQL and PostgreSQL.

MySQL:
  • Is best suited for simple querying

  • Works exceptionally well in read-heavy systems that expect high read speeds

  • Underperforms under heavy loads or with complex queries

  • Is apt when high speed and ease of setup are the requirements

PostgreSQL:
  • Is best suited for high-volume data operations and complex queries

  • Works exceptionally well in systems requiring fast read and write speeds, extensive data analysis, and data validation

  • Supports a variety of performance enhancers, as available in commercial solutions such as SQL Server

  • Is apt when complex processing, integration, data integrity, and intricate designs are the requirements

Put another way, if MySQL is the Honda Activa 5G, PostgreSQL is sort of like the Royal Enfield Bullet (a four-stroke motorcycle). The Royal Enfield may have more capabilities and support extreme conditions, but it’s probably not necessary if you just have to go to the grocery store across the road. The choice of SQL implementation depends on your project requirements.

It’s easy to get started with Cloud SQL, as no software installations are required. You just log in to GCP and create an instance, which you can access using Cloud Shell, APIs, SDKs, etc. You will learn more about accessing the database when you start working with the instance in a forthcoming chapter.

This section looks at provisioning a Cloud SQL instance using Cloud Console. You’ll get started by provisioning the Cloud SQL-MySQL instance.

Provision a MySQL Instance

Navigate to https://console.cloud.google.com/ and log in using your Google account. Once you’re signed in, Google redirects you to the Cloud Console.

As discussed earlier, resources belong to a project, so the first thing you need to ensure is that your project is selected.

To run the exercises for Cloud SQL, this book uses a project named DemoCloudSQL. Select it from the project dropdown to get started. Figure 2-1 shows the project selected.
Figure 2-1. Project selection

Next, open the left-side navigation menu and click SQL under the Storage section, as shown in Figure 2-2.
Figure 2-2. SQL selection under the Storage section

This takes you to the Cloud SQL page. You will see the Create Instance button, as shown in Figure 2-3.
Figure 2-3. Cloud SQL Create Instance pop-up

Note

This is the first Cloud SQL instance being created, which is why no instances are listed.

Click on Create Instance and choose MySQL from the subsequent screen.

Figure 2-4 shows the database engine selection screen.
Figure 2-4. Database engine selection

This will take you to the page shown in Figure 2-5, where you set up the parameters of the SQL instance.
Figure 2-5. Instance creation page

You first specify the Instance ID. For this example, name the instance mysqldb.

Note

The name has to be unique within your project, which means you can have only one instance with this name at a time.

Next choose a password to access the MySQL instance. The Cloud Console can automatically generate a new secure password or you can choose your own.

You also have the option to choose No Password; however, this is not a recommended option as anyone could connect to your instance with full admin privileges.

Specify your own password. Next you have to choose where, geographically, you want the instance to be located. Google has datacenters across the world, so you have various options.

For this example, choose us-central1 as the region and us-central1-a as the zone. With all details filled in, the form will look like Figure 2-6.
Figure 2-6. Cloud SQL instance creation form filled in with details
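For comparison, the same choices can be expressed programmatically. The sketch below builds a request body for the Cloud SQL Admin API’s instances.insert method; the field names follow that API’s DatabaseInstance resource, and the database version is an assumption, since the text doesn’t fix a MySQL version:

```python
# Hypothetical sketch of the console form as a Cloud SQL Admin API request
# body (instances.insert). Values mirror the form filled in above; the
# database version is an assumption.
mysql_instance = {
    "name": "mysqldb",
    "databaseVersion": "MYSQL_5_7",  # assumption: version not fixed in the text
    "region": "us-central1",
    "settings": {
        "tier": "db-n1-standard-1",             # default machine type
        "dataDiskSizeGb": "10",                 # default 10GB disk
        "dataDiskType": "PD_SSD",
        "storageAutoResize": True,              # automatic storage increase
        "locationPreference": {"zone": "us-central1-a"},
    },
}
```

Submitting such a body (for example, through the google-api-python-client library or the gcloud CLI) would require credentials and a real project, so this sketch stops at constructing the request.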

Note that the options you specify here are the basic ones; a list of advanced options is available for further configuration.

You can change these options at any time later, so to begin with, stick to the base configuration. You’ll learn more about these options in a forthcoming chapter, which discusses the factors important for setting up a Cloud SQL instance for a production environment.

To have a look at the options, click Show Configuration Options near the bottom of the page. This lists all the advanced options available to you. Figure 2-7 shows the options. Before you proceed, take a moment to familiarize yourself with them.
Figure 2-7. Advanced configuration options

The Set Connectivity option enables you to configure access to your instance. Figure 2-8 shows the available options.
Figure 2-8. Set connectivity options

You might want to change the size of the VM instance or increase the size of the associated disk, and for this you have the Configure Machine Type and Storage option. Figure 2-9 shows the available choices.
Figure 2-9. Configure machine type and storage

By default, the instance uses a db-n1-standard-1 type instance and starts with 10GB SSD disk with automatic storage increase enabled.

Automatic storage increase is a must when you use the instance for a production deployment, as the data within the database grows constantly. If it exceeds the current disk capacity, Google automatically adds storage to accommodate the growth. You can specify a limit up to which the increase may occur; the default value is 0, which means there is no limit. Instance data is stored in the region where the instance resides.

Next, for scheduling backups and enabling high availability, there is the Enable Auto-Backups and High Availability option, shown in Figure 2-10.
Figure 2-10. Enable auto backups and high availability

By default, automatic backups and binary logging are selected. Automatic backups back up the data at periodic intervals and you can specify the time at which the backup should occur.

Binary logging enables point-in-time recovery and data replication. Point-in-time recovery lets you restore your data to a moment in the past, which is beneficial in systems with frequent transactions. This combination of backups and binary logging keeps you safe from data loss. Backup data is stored in two regions for redundancy.

The High Availability option provisions a fault-tolerant instance. It is not selected by default. However, if you are working on a banking application, where the application needs to be highly available to its users, you need a Cloud SQL database that is highly available, so you should enable High Availability.

You can also add Cloud SQL flags. These are used to adjust the database parameters and options to customize your database. Figure 2-11 shows the steps for setting the default time zone.
Figure 2-11. Set the default time zone

Figure 2-12 shows the steps for specifying the log output type.
Figure 2-12. Specify the log output

There are many more flags. Flag values persist for the life of the instance.

The Set Maintenance Schedule option configures the maintenance schedule for the instance in just a few clicks. Figure 2-13 shows the options available. This is in contrast to traditional methods, where, prior to scheduling maintenance, you had to work through a complete checklist to ensure data consistency and state.

This is a good example of the routine, time-consuming, and mundane activities you are freed from as you move to GCP.
Figure 2-13. Set maintenance schedule

Finally, there is the Labels option, which enables you to label your instance.

All the important options from a production deployment perspective are covered in a forthcoming chapter dedicated to this topic. To get started with Cloud SQL, this section moves ahead with the basic configurations and leaves the remaining ones as they are.

Click on Create to start creating the Cloud SQL instance.

You are redirected to the Instance List page, as shown in Figure 2-14, where the instance appears greyed out while it initializes and starts. It takes some time to spin up.
Figure 2-14. Instance List page

After it’s done, a green tickmark will appear next to the instance name, confirming it has been created. Figure 2-15 shows the Instance List page with the newly created instance.
Figure 2-15. New instance has been created

You also receive a notification in the notification pane confirming the creation, as shown in Figure 2-16.
Figure 2-16. Notification pane

After the instance has been created, you can use the Instance Details page to confirm that the instance is all set. Every instance has an Instance Details page, which enables you to monitor, manage, and work with the database instance using the Cloud Console.

You can familiarize yourself a bit with the Instance Details page. Click on the instance name in the instance list. It takes you to the Instance Details page. Figure 2-17 shows all the options available on the Instance Details page.
Figure 2-17. Instance Details page for mysqldb

At the top of the Instance Details page, you’ll see the Stop, Restart, Clone, and Delete options. Next to those, you also have the option to import/export data to/from a database within the instance. The Edit option opens the configuration page with the options you set while creating the instance, enabling you to modify the settings.

In addition, there are various tabs available to work with the instance. For example, the Users tab manages the instance database users. Figure 2-18 shows the Users tab details.
Figure 2-18. Users details for mysqldb

The Databases tab lists all the databases of the instance, as shown in Figure 2-19. You can use it to manage your databases, create a new database, and delete an existing user database.
Figure 2-19. Databases tab for mysqldb

Note

Since this is a fresh instance, the databases displayed are the default databases that MySQL creates whenever a new instance is created.

You can further use the Backups tab to schedule backups or create on-demand backups, the Connections tab to manage the connectivity of the database, the Replicas tab to create a read replica or a failover replica, and the Operations tab to monitor the database instance operations.

Feel free to explore the details page to familiarize yourself with it. We will come back to these later as we explore a few of the options in Chapter 4. Having covered provisioning of a MySQL instance, it’s time to look at provisioning the PostgreSQL instance.

Provisioning a PostgreSQL Instance

As with MySQL, you can select the SQL option under the Storage section. It will take you to the Cloud SQL instance list, where you should select Create Instance. Figure 2-20 lists the database engine choices.
Figure 2-20. Available database engine choices

Choose PostgreSQL and click Next. It will take you to a page where you set up the parameters of the SQL instance. The page, as shown in Figure 2-21, is similar to the one you saw when creating a MySQL instance.
Figure 2-21. Create PostgreSQL instance

As with MySQL, you need to provide an instance ID. For this example, let’s name the instance test-ps. Next, set the password for the default user (postgres). As with MySQL, the Cloud Console can generate a password or you can choose your own. However, unlike MySQL, there is no option to skip the password; specifying one is mandatory. The password here is set for the default user postgres, which differs from MySQL’s default user root.

Again like MySQL, you need to specify where, geographically, the instance is to be located. Use the defaults for this example—Region as us-central1 and Zone as Any.

For advanced configuration, you can use the Show Configuration Options link. All the options are the same as in the MySQL section, except for the Configure Machine Type option and the high availability setup.

Like with MySQL, the Configure Machine Type option enables you to change the size of the VM instance for the database. However, unlike with MySQL, there are no predefined options for the instance type selection. Figure 2-22 shows the PostgreSQL Configure Machine Type options. You can set the vCPU core and memory as per your application needs.

Note

When you change the cores, memory is automatically changed to match the core settings, but you can customize it further if needed. However, you should be careful when customizing this option, as this impacts the pricing and the sustained use discounts.

Figure 2-22. PostgreSQL Machine Type Configuration option

When enabling backups, unlike with MySQL, there is no binary logging option, and High Availability is configured using regional instances. With regional instances, persistent disks are used for replication.

As mentioned, these are discussed in a later section. From a user perspective, a regional instance provides safety against failure of the primary instance. As with MySQL, retain the default settings for the advanced configuration. With all the details filled in, the provisioning page looks like Figure 2-23.
Figure 2-23. Configured PostgreSQL new instance creation form

Click on Create to provision the PostgreSQL instance. It takes a few minutes to provision the machine. Figure 2-24 shows the instance list page with the PostgreSQL instance listed.
Figure 2-24. Instance list page with PostgreSQL instance being provisioned

As with the MySQL instance, after it’s been provisioned, the instance will appear on the instance list page with a green tickmark indicating it’s been enabled, as shown in Figure 2-25.
Figure 2-25. PostgreSQL instance provisioned and listed

Click on the instance name and review the details page. Figure 2-26 shows the PostgreSQL Instance Details page.
Figure 2-26. PostgreSQL Instance Details page

As on the MySQL Instance Details page, all the options are the same. However, the content in the tabs is database specific. For example, the Users tab in PostgreSQL lists the default user postgres, which differs from the root user in MySQL. The options and activities on each tab are the same in the console; for example, you can use the Databases tab to create new databases and the Users tab to add new users.

Summary

This chapter introduced you to relational databases and Cloud SQL, Google’s fully managed relational database. It covered instance provisioning using Cloud Console.

The next chapter explains how to start working with the Cloud SQL (MySQL) instance using Cloud Shell and how to use the database instance to build a sample application using Python code.
