This chapter covers using the Google Cloud Console to create projects and instances in order to install PostgreSQL and build your applications. The chapter also covers some of the Google Cloud Platform (GCP) services, like Compute Engine, Cloud Storage, and Cloud SQL, in detail. You’ll learn how to install PostgreSQL on machines created using Compute Engine and on the PostgreSQL service, which is part of Cloud SQL.
Getting Started with GCP
As with the other cloud vendors, Google Cloud has a console through which you can see all the available services and create/modify/delete services that you need.
The first step to creating an instance or a service using GCP is to sign in to its console at https://console.cloud.google.com/ . The console lets you sign in using an existing email account with no hassle; it took me about 10 seconds to access the Google Cloud Console.
Before going further with GCP, we cover why you’ll need a project in GCP.
What Is a GCP Project?
This is an interesting concept, as it is not something you see with most cloud vendors. Consider an organization that has 10 applications. It uses 300 servers as application/web servers and another 10 servers for databases. Viewing a console with such a huge list of servers is confusing, and if you cannot segregate the servers by application on the console, managing these accounts becomes difficult.
I still remember working with a console that had several hundred servers. I had to search the entire list to identify the database servers of an application that was being retired. Decommissioning those database servers took a few days, as I had to confirm several times before deleting the instances.
GCP forces you to create a project before creating an instance. A simple solution to the previous problem is to create one project for every application and create your instances within those projects. This makes life simple. GCP allows you to create multiple projects using a single account.
Returning to the earlier example: if I have 10 applications running in my organization, I create 10 projects and place all 300 application servers and 10 database servers within their respective projects. Then I don't have to scan through the entire list of instances when I have to retire an application; I can just delete the instances in that project.
Project Quota
Have you ever thought about the project quota, which is the number of projects that can be created using a single account? Yes, there is a limit, and it’s 12 projects. However, Google allows you to request more projects, considering a variety of factors, including the resources that most legitimate customers use, the customer’s previous usage and history with Google Cloud, and previous abuse penalties. When you try to create your 13th project, you are automatically prompted to fill out a form.
You need to pay to increase your quota, but it is then available as a credit in a later billing cycle.
Creating a Project Using the Console
Once you click on Create an Empty Project, you see a rectangular box that allows you to choose a project name. Your project name can only include letters, numbers, quotes, hyphens, spaces, and exclamation points. As shown in Figure 5-2, you will see a message saying that you have 11 projects remaining in your quota. Every time you create a project, you’ll see the number of projects you have left.
Click on Create to create your first project on GCP. You should be automatically redirected to the dashboard. You can also access your dashboard with this link: https://console.cloud.google.com/home .
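If you prefer the command line, the same project can be created with the gcloud CLI. This is a sketch, assuming the Cloud SDK is installed and authenticated; the project ID my-pg-project is a hypothetical example (project IDs must be globally unique):

```shell
# Create a new project (hypothetical, globally unique project ID)
gcloud projects create my-pg-project --name="My PG Project"

# Make it the default project for subsequent gcloud commands
gcloud config set project my-pg-project

# List your projects to confirm it was created
gcloud projects list
```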
Deleting a Project
Types of Google Cloud Platform Services
Compute Engine
Cloud Storage
Cloud SQL
Compute Engine
Compute Engine provides virtual machines running in Google data centers. As with the other cloud vendors, it provides a powerful network that lets you connect your machines without interruption, and it comes with persistent disks that survive crashes and deliver consistent performance.
Key Features of Compute Engine
Global load balancing: You can load balance your incoming connections and requests across multiple instances that are created across regions. It gives maximum performance, throughput, and availability.
Linux and Windows support: You can select OSes of Linux flavors like Debian, CentOS, SUSE, Ubuntu, and Red Hat; Unix flavors like FreeBSD; and Windows flavors like Windows 2008 R2, 2012 R2, and 2016.
Compliance and security: Data written to persistent disk is encrypted on the fly and stored in encrypted form. Google Compute Engine has completed ISO 27001, SSAE-16, SOC 1, SOC 2, and SOC 3 certifications, demonstrating their commitment to information security.
Transparent maintenance: You do not need to worry about maintaining the infrastructure behind Compute Engine. Google's data centers are secure, live migration moves your VMs without downtime, and proactive infrastructure maintenance, such as OS patching, is handled for you. This transparent maintenance improves reliability and security.
Automatic discounts: If you have long-running workloads, Google automatically gives you discounted prices with no sign-up fees or up-front commitment.
Custom machine types: You can shape your VMs based on how many vCPUs and how much RAM you actually need, in increments of two vCPUs and 0.25GB of RAM at a time. By customizing your machines, you can save money and pay only for what you use.
Per-minute billing: This is an excellent GCP feature. Google bills in minute-level increments; after a 10-minute minimum charge, you pay only for the actual compute time you use.
More information about Google Compute Engine is available at https://cloud.google.com/compute/docs/ .
Create an Instance
- 1.
Log in to the Google Cloud Console using your Gmail account.
- 2.
Create a project in which you want to initialize your services. Read the “Creating a Project Using the Console” section in this chapter to create a project.
- 3.
Click on Cloud Launcher on the left side panel.
- 4.
Click on Google Cloud Platform on the left side panel.
- 5.
Click on Compute Engine in the Compute section.
- 6.
You will be directed to a page where you see a couple of options—Go to Compute Engine and Take Quickstart (see Figure 5-5).
- 7.
You can click on Take Quickstart to get a quick 10-minute video on this process.
- 8.
Once you’re done with the quickstart, come back and select the Go To Compute Engine option. You can instead directly open the https://console.cloud.google.com/compute/instances link, which takes you to the Create Instance page, as shown in Figure 5-6.
- 9.
You need to fill in the details of the instance before you create it (see Figure 5-7).
- a.
Name: Must start with a lowercase letter followed by up to 63 lowercase letters, numbers, or hyphens, and cannot end with a hyphen.
- b.
Zone: Determines what computing resources are available and where your data is stored and used.
- c.
Machine Type: The type of machine that you want to create. You can choose the CPU cores and memory for your instance. You can upgrade it later if you initially choose a low configuration.
- d.
Boot disk: Each instance requires a disk to boot from. Select an image or snapshot to create a new boot disk, or attach an existing disk to the instance. Be sure to choose the operating system that you want, as shown in Figure 5-8.
- e.
Identity and API access: Applications running on the VM use the service account to call Google Cloud APIs. Select the service account you want to use and the level of API access you want to allow. Access Scopes selects the type and level of API access granted to the VM. The defaults are read-only access to Storage and Service Management, write access to Stackdriver Logging and Monitoring, and read/write access to Service Control.
- f.
You can see more options for managing disks, networking, and SSH keys by clicking on the Management, Disks, Networking, SSH Keys option, as shown in Figure 5-9.
- 10.
Click on Create to create the instance.
- 11.
Connecting to this instance is easy, as Google Cloud provides its own shell to connect. The available connection options are shown in Figure 5-12.
- 12.
The following images show the process after connecting to instances using the SSH options.
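The console steps above can also be performed from the gcloud CLI. This is a sketch under the assumption that a project is already configured; the instance name, zone, machine type, and image family are illustrative choices, not the only valid ones:

```shell
# Create a Debian VM (n1-standard-2: 2 vCPUs, 7.5GB RAM) in us-central1-a
gcloud compute instances create pg-server-1 \
    --zone=us-central1-a \
    --machine-type=n1-standard-2 \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --boot-disk-size=50GB

# Verify the instance is running and note its external IP
gcloud compute instances list
```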
How to Connect from Your Machine
Once you have added your keys, you can directly SSH to your virtual machine using this command:
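A sketch of the SSH command, where username, ipaddress, and the key path are placeholders for your key's user, the external IP shown in the VM details, and the private key matching the public key you uploaded:

```shell
# SSH to the VM using the private key whose public half you added to the instance
ssh -i ~/.ssh/my-gcp-key username@ipaddress
```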
Note that ipaddress is the same one you can see in the VM details.
Install PostgreSQL
Once you have created your instance, you will have a VM ready for PostgreSQL installation. There are several ways to install PostgreSQL: you can use RPMs or install from source.
One of the easiest and most reliable ways to install PostgreSQL is through BigSQL, an open source DevOps platform designed for PostgreSQL. BigSQL binaries are portable across Linux and Windows operating systems. A user may want to install additional extensions and tools to build a complete PostgreSQL server for production. BigSQL combines a carefully selected list of extensions that have been deployed in several PostgreSQL production environments after rigorous testing. This makes it easy for users to choose the extensions they want and install them using BigSQL's simple command-line features.
- 1.
Go to https://www.openscg.com/bigsql/ .
- 2.
Click on the Downloads section.
- 3.
Click on Usage Instructions, as shown in Figure 5-16.
- 4.
As per the usage instructions, for Linux machines, you can execute the following command to install the BigSQL package.
python -c "$(curl -fsSL https://s3.amazonaws.com/pgcentral/install.py)"

BigSQL uses a command-line utility called pgc (pretty good command-line).
Here are some example commands.
To list the available PostgreSQL binaries and extensions for PostgreSQL, run the following command.
To install PostgreSQL 9.6, run the following command.
To install an extension called pg_repack, run this command.
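The commands referenced above did not survive in this text. The following is a sketch of typical pgc invocations, assuming BigSQL's documented command names and component IDs (pg96 for PostgreSQL 9.6; the pg_repack component name may differ by BigSQL release):

```shell
# Change into the directory created by the install.py bootstrap
cd bigsql

# List the available PostgreSQL binaries and extensions
./pgc list

# Install PostgreSQL 9.6
./pgc install pg96

# Install the pg_repack extension built for PostgreSQL 9.6
# (component name assumed; check ./pgc list output for the exact ID)
./pgc install pg_repack-pg96
```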
You need not worry about dependencies, such as the gcc compiler or other packages required when installing Postgres or its extensions. BigSQL takes care of all the dependencies and makes it very easy for you to work with PostgreSQL.
One of the most advanced features of BigSQL is pgDevOps, a UI that allows users to install and manage PostgreSQL instances in a few clicks. Users can upgrade PostgreSQL minor versions or install and update extensions in a few clicks. BigSQL also helps users generate query/connection metrics through pgBadger reports in its UI, on request. Users can also tune complex procedural-language code using an excellent tool embedded in the UI, called plProfiler Console. Using plProfiler Console, users can look at the complete call stack of a complex PostgreSQL function and concentrate on the code that consumed the most execution time in the entire call stack.
Thus, BigSQL helps users install and manage PostgreSQL and its extensions in a few clicks. BigSQL, combined with a cloud service, can easily deliver a very economical PostgreSQL database on the cloud.
Google Cloud Storage
Google Cloud Storage (GCS) offers object storage that’s simple, secure, durable, and highly available. It can be used by developers and IT organizations alike. GCS’s simple capacity pricing is cost-effective across all storage classes, with no minimum fee; it’s a pay-for-what-you-use model.
Storage Classes
Multi-regional storage
Regional storage
Nearline storage
Coldline storage
Multi-regional storage is a redundant storage model across geographical locations and it has the highest level of availability and performance. It is ideal for low-latency, high QPS content serving to users distributed across geographic regions.
Regional storage stores data in a single region and offers consistent performance within that region. It is ideal for compute, analytics, and Machine Learning (ML) workloads in a particular region.
Nearline is designed for data that you do not want to access frequently, so it is useful for infrequently accessed data.
Coldline is designed for cold data, such as archive and disaster recovery.
In addition, with lifecycle management, Google Cloud Storage allows you to reduce your costs even further by moving your objects to Nearline and Coldline and through scheduled deletions. Google Cloud Storage stores and replicates your data, providing a high level of durability, and all data is encrypted both in flight and at rest.
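As an illustration of lifecycle management, a bucket can be configured to transition objects to Nearline and later delete them. This is a sketch using gsutil, with a hypothetical bucket name and example age thresholds:

```shell
# lifecycle.json: move objects to Nearline after 30 days, delete after 365 days
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
     "condition": {"age": 30}},
    {"action": {"type": "Delete"},
     "condition": {"age": 365}}
  ]
}
EOF

# Apply the policy to a bucket (hypothetical bucket name)
gsutil lifecycle set lifecycle.json gs://my-pg-backups
```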
Key Features of GCS
Google Cloud Storage is almost infinitely scalable. It can support applications of any size, from small deployments to multi-exabyte systems.
All four storage classes that we talked about offer very high availability. Multi-regional storage offers 99.95% monthly availability in its Service Level Agreement. Regional storage offers 99.9% availability, and the Nearline and Coldline storage classes offer 99%.
Like very few vendors, Google Cloud Storage is designed for 99.999999999% durability. It stores multiple copies redundantly across multiple disks, racks, and power and network failure domains, with automatic checksums to ensure data integrity. As already discussed, with the Multi-Regional storage class, data is also geo-redundant across multiple regions and locations.
GCS offers strong consistency: it guarantees that when a write succeeds, the latest copy of the object will be returned by any GET, globally (this applies to PUTs of new or overwritten objects and to DELETEs).
More information about Google Cloud Storage is available at https://cloud.google.com/storage/docs/ .
We cover this storage option more in Chapter 8, “Backups on the Cloud.”
Cloud SQL
Google Cloud SQL is a fully managed database service from GCP. Like other cloud vendors’ RDBMS offerings, Google Cloud SQL makes it easy to set up, manage, and maintain relational databases. It makes the administrator’s job easy.
Cloud SQL for MySQL
Cloud SQL for PostgreSQL (beta)
Let’s talk about the PostgreSQL service.
Cloud SQL for PostgreSQL
This service was introduced recently and is still in beta. However, you can still choose it for your POCs and for testing your applications. As this is still a beta, some important features available from other cloud vendors for PostgreSQL are missing; they will likely be introduced in later releases. Given its storage security and durability, you can still consider deploying your applications on Cloud SQL. Note that this product might change in backward-incompatible ways and is not subject to any SLA or deprecation policy. Let’s look at the key features of this service.
PostgreSQL 9.6, the latest version at the time of writing, is available in the cloud and fully managed.
You can choose machine types according to your application demand. Custom machine types with up to 208GB of RAM and 32 CPUs are available.
You can create and manage instances in the Google Cloud Platform Console just like with other cloud vendors.
Instances are available in US, EU, and Asia.
You do not need to worry about storage in the case of large applications. Up to 10TB of storage is available, with the ability to automatically increase storage as needed.
There is more security for your data as customer data is encrypted on Google’s internal networks and in database tables, temporary files, and backups.
Has support for secure external connections using Cloud SQL Proxy or SSL protocol.
Support for PostgreSQL client-server protocol and standard PostgreSQL connectors.
You can import and export databases using SQL dump files.
Backups are automated, and you can take on-demand backups too.
Monitoring and logging are available.
The following features are not yet available in the beta:
Replication
High-availability configuration
Point-in-time recovery (PITR)
Import/export in CSV format
Supported extensions include:
PostGIS
Data type extensions like btree_gin, btree_gist, cube, hstore, etc.
Language extensions like plpgsql
Miscellaneous extensions like pg_buffercache, pgcrypto, tablefunc, etc.
For a complete list, visit this link: https://cloud.google.com/sql/docs/postgres/extensions .
Cloud SQL for PostgreSQL supports the PL/pgSQL SQL procedural language.
Support for front-end or application languages that connect to PostgreSQL is robust. As with an on-premises database, you can use applications written in several languages. Just like on-premises, Cloud SQL provides a flexible environment for applications that are written in Java, Python, PHP, Node.js, Go, and Ruby. You can also use Cloud SQL for PostgreSQL with external applications using the standard PostgreSQL client-server protocol.
You can connect to your instances using:
A psql client: https://cloud.google.com/sql/docs/postgres/connect-admin-ip
Third-party tools that use the standard PostgreSQL client-server protocol
External applications: https://cloud.google.com/sql/docs/postgres/connect-external-app
Google App Engine applications: https://cloud.google.com/sql/docs/postgres/connect-app-engine
Applications running on Google Compute Engine: https://cloud.google.com/sql/docs/postgres/connect-compute-engine
Applications running on Google Container Engine: https://cloud.google.com/sql/docs/postgres/connect-container-engine
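For example, a direct connection with a psql client looks like the following sketch; the host IP (a documentation placeholder) must be replaced with your instance's public address, and your client machine's IP must first be authorized on the instance:

```shell
# Connect to the Cloud SQL instance as the default postgres user over SSL
# (203.0.113.10 is a placeholder IP; you will be prompted for the password)
psql "host=203.0.113.10 port=5432 user=postgres dbname=postgres sslmode=require"
```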
Connecting from Google Cloud functions or by using Private Google access is not supported.
You cannot have SUPERUSER privileges for your users. However, an exception to this rule is made for the CREATE EXTENSION statement, but only for supported extensions.
Custom background workers are not supported.
The psql client in the Cloud Shell does not support operations that require a reconnection, such as connecting to a different database using the \c command.
Some PostgreSQL options and parameters are not enabled for editing as Cloud SQL flags. Google advises: “If you need to update a flag that is not enabled for editing, start a thread on the Cloud SQL Discussion group.”
Create a PostgreSQL Instance Using Cloud SQL
- 1.
Log in to the Google Cloud Console using your Gmail account.
- 2.
Create a project in which you want to initialize your services. See the “Creating a Project Using the Console” section earlier in this chapter to create a project.
- 3.
Click on Cloud Launcher on the left side panel.
- 4.
Click on Google Cloud Platform on the left side panel.
- 5.
Click on Cloud SQL in the Storage section.
- 6.
You will be directed to a page where you will see a couple of options—Go to Cloud SQL and Take Quickstart. See Figure 5-18.
- 7.
If you want to look at the documentation, pricing, and API-related information, click on Take Quickstart. Click on PostgreSQL, as highlighted in Figure 5-19.
- 8.
Once you’re done with the quickstart, select the Go to Cloud SQL option. You can also directly open the https://console.cloud.google.com/sql/instances link, which takes you to the create instance page shown in Figure 5-20.
- 9.
Select PostgreSQL Beta on the left side panel and click on Next. See Figure 5-21.
- 10.
The instance creation window is shown in Figure 5-22. Set the following options:
Instance ID: The name of your instance, which you can use as a tag in the future.
Default user password: The password of your postgres user. You can create your own password or you can generate a random password by clicking on Generate.
Location: Shows the region and zone where you want your instance. If you choose a region near your location, you will get better performance.
Machine type: Determines the virtualized hardware resources available to your instance, such as memory, virtual cores, and persistent disk limits. This choice affects billing. Constraints on dedicated core machine types are that memory must be at least 3.75GB, memory must be a multiple of 0.25GB, vCPU count must be one or even, and memory per vCPU must be between 0.9GB and 6.5GB per vCPU, inclusive. Some zones do not support machine types with 32 vCPUs.
Network throughput (MB/s): The maximum amount of data that can be delivered over a connection to your instance. This includes reads/writes of your data (disk throughput) as well as the content of queries, calculations, and other data not stored on your database.
Storage type: This choice is permanent. Select SSD or HDD.
Storage capacity: Cannot be decreased later, so choose your capacity wisely.
- If you want machines whose storage auto-scales, select the Enable Automatic Storage Increases option. Whenever storage reaches the threshold, it is increased automatically.
- Enable auto backups: Enables auto backups of your databases. You can schedule a time for your backups. As they affect performance, it is recommended to schedule backups during your off-peak hours. See Figure 5-24.
- Authorize Networks: Add the IPv4 addresses that you want to allow to connect to the database. You can provide a particular IP or a range of IPs. See Figure 5-25.
- Add Cloud SQL flags: You can add parameters that you want to change. Currently only a few parameters can be changed (see Figure 5-26). The full list is available at https://cloud.google.com/sql/docs/postgres/flags .
- Set maintenance schedule: Allows you to set a time for the database maintenance activities. The instance will automatically restart to apply updates during a one-hour maintenance window. Updates happen once every few months. Choose a window, or leave it as Any window and Cloud SQL will pick a day and hour. See Figure 5-27.
Once you have set all the PostgreSQL instance options the way you want them, click on Create to create the instance.
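The same instance can also be created from the gcloud CLI. This is a sketch; at the time of the PostgreSQL beta, these commands lived under the gcloud beta component, and the instance name, machine shape, and region are illustrative:

```shell
# Create a PostgreSQL 9.6 instance (2 vCPUs, 7.68GB RAM) in us-central1
gcloud beta sql instances create mypg-instance \
    --database-version=POSTGRES_9_6 \
    --cpu=2 --memory=7680MiB \
    --region=us-central1

# Confirm the instance is ready
gcloud beta sql instances list
```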
Summary
This chapter explained the Google Cloud Console (GCC) and how to start using it. You learned how to create projects in GCC and how to create instances under Google Cloud Platform services like Compute Engine. We also covered PostgreSQL installation on those instances, as well as Cloud Storage, Cloud SQL, and the PostgreSQL service under Cloud SQL. We hope this helps you start your applications on Google Cloud. The next chapter covers Microsoft Azure, including how to get started and the services available. It also covers how to create virtual machines and focuses on the recently introduced PostgreSQL service.