In this chapter, we will learn how to work with PostgREST, a standalone web server that turns any PostgreSQL database directly into a usable RESTful API. The API endpoints and operations are determined by the permissions and structural constraints of the PostgreSQL database. PostgREST serves as an automated alternative to writing manual CRUD (Create, Read, Update, and Delete) queries.
The project in this chapter uses PostgREST and Docker to set up a RESTful API for banking transactions. We will send GET, POST, PUT, and DELETE curl requests to the PostgreSQL 12 Relational Database Service (RDS) instance on Amazon Web Services (AWS) introduced in Chapter 2, Setting Up a PostgreSQL RDS for ATM Machines, to create, retrieve, update, and delete ATM machine locations within a typical city.
The following topics will be covered in the chapter:
- Introduction to PostgREST
- Creating a PostgREST API schema on an RDS
- Executing PostgREST
- Adding a trusted user
- Creating a PostgREST token
- PostgREST administration
- PostgREST on TimescaleDB
Technical requirements
Working through this chapter will take developers around 10-12 hours.
The figures and code files of this chapter are available at the following link: https://github.com/PacktPublishing/Developing-Modern-Database-Applications-with-PostgreSQL/tree/master/Chapter06.
Introduction to PostgREST
In this section, we are going to practice Docker installation and learn how to start Docker as a service. We will then pull a PostgreSQL image from Docker Hub for our first launch of standalone PostgREST.
Using Docker
Docker is a tool that uses containers to package an application with all of its libraries and other dependencies and ship it all out as one package:
- We will PuTTY into our ec2 instance with the built-in user for our current AMI, that is, the centos user, with the help of the following command:
[centos@ip-172-31-95-213 ~]$ sudo su
[root@ip-172-31-95-213 centos]# cd /usr/local/src/
- Check whether Docker is installed. If it is not, install it with the help of the following commands:
[root@ip-172-31-95-213 src]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@ip-172-31-95-213 src]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@ip-172-31-95-213 src]# yum install docker-ce
- During the installation, answer y (yes) to any prompts on the terminal, as shown in Figure 6.1:
- Add your user to the docker group with the help of the following command:
[root@ip-172-31-95-213 src]# usermod -aG docker $(whoami)
- Set Docker to start automatically at boot time by typing the following command:
[root@ip-172-31-95-213 src]# systemctl enable docker.service
- Finally, start the Docker service by running the following command:
[root@ip-172-31-95-213 src]# systemctl start docker.service
In order to run Docker commands, the Docker daemon must be running, which is why we enabled and started the Docker service in the preceding steps.
Installing standalone PostgREST
After we have started the Docker daemon, we can run Docker commands without daemon connection errors:
- Let's pull and start the official PostgreSQL image from Docker Hub, as shown here:
[root@ip-172-31-95-213 src]# docker run --name tutorial -p 5432:5432 -e POSTGRES_PASSWORD=mysecretpassword -d postgres
This will run the Docker instance as a daemon and expose port 5432 to the host system so that it looks like an ordinary PostgreSQL server to the rest of the system, as shown in Figure 6.2:
- Next, we will connect to the SQL console (psql) inside the container with the help of the following command:
[root@ip-172-31-95-213 src]# docker exec -it tutorial psql -U postgres
psql (11.5 (Debian 11.5-1.pgdg90+1))
Type "help" for help.
postgres=#
Once the preceding command is executed, we will see the psql command prompt. You can start to create your database schema and tables here so that later on you will have the PostgREST API connecting to these local databases.
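For instance, a minimal local schema might look like this (a sketch only; the api schema and todos table are hypothetical placeholders for illustration, not part of this chapter's ATM database):

```sql
-- Hypothetical demo objects inside the container's local database
create schema api;

create table api.todos (
  id    serial primary key,
  done  boolean not null default false,
  task  text not null
);

insert into api.todos (task) values ('learn PostgREST'), ('build an API');
```

A PostgREST instance pointed at this database could then expose /todos automatically, following the same pattern we will use for the ATM database later in the chapter.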
Standalone PostgREST supports the following two features:
- TimescaleDB: An open source database for time series data that provides automatic partitioning across time and space for PostgreSQL
- PostgREST Test Suite: A unit test library that provides a sample PostgreSQL test database with creation scripts, deploys a sample PostgREST service onto that database, and then runs automated scripts that check the service's memory usage and send sample input/output to verify that it works well
Next, we will move on from standalone PostgREST, because we need to test PostgREST on AWS in order to deploy our banking application.
Creating a PostgREST API schema on an RDS (AWS)
We will install PostgREST on a CentOS ec2 instance, and then we will set up PostgREST to create an API for our PostgreSQL 12 RDS on the AWS cloud:
- Let's install PostgREST from the binary release by using the command shown in the following code block:
[root@ip-172-31-95-213 src]# yum install postgresql-libs
Answer y (yes) when prompted. Then download and install the PostgREST binary:
[root@ip-172-31-95-213 src]# wget https://github.com/PostgREST/postgrest/releases/download/v6.0.1/postgrest-v6.0.1-centos7.tar.xz
[root@ip-172-31-95-213 src]# tar xfJ postgrest-v6.0.1-centos7.tar.xz
[root@ip-172-31-95-213 src]# mv postgrest /usr/local/bin/
[root@ip-172-31-95-213 src]# rm postgrest-v6.0.1-centos7.tar.xz
- We will now define a user role that has permission to access the RDS ATM database and to read records from the ATM locations table, as shown:
create role web_anon nologin;
grant web_anon to dba;
grant usage on schema public to web_anon;
grant select on public."ATM locations" to web_anon;
After executing the preceding query, we will get the following result as shown in Figure 6.3:
For PostgREST to connect with our RDS on the AWS cloud, we should remember the name assigned for the new role. In the example demonstrated here, we have assigned the name web_anon to the new role.
Executing PostgREST
We will now combine everything we have set up so far, namely the database schema, the user role, and the connection to the PostgreSQL 12 RDS, into PostgREST in the following steps:
- We will create the PostgREST configuration file as shown and will name it tutorial.conf:
[root@ip-172-31-95-213 src]# mkdir postgrest
[root@ip-172-31-95-213 src]# cd postgrest
[root@ip-172-31-95-213 postgrest]# vi tutorial.conf
db-uri = "postgres://dba:[email protected]/atm"
db-schema = "public"
db-anon-role = "web_anon"
- Now, we will call PostgREST to create an API on our RDS with the help of the following command:
[root@ip-172-31-95-213 postgrest]# postgrest tutorial.conf
PostgREST will start listening on port 3000 after it is connected to the RDS, hence the automatic API on the ATM database has been established. After you run the preceding command, your console will look as shown in the following screenshot:
- Open a second terminal, leaving the current one with PostgREST still running inside it, so that we have two terminals at the same time. Try making an HTTP request for the ATM locations as shown:
[root@ip-172-31-95-213 centos]# curl http://localhost:3000/ATM%20locations
The space character is replaced by %20 inside the curl statement. Please observe the curl result in Figure 6.5:
- With the permissions allotted to the current role, anonymous requests have read-only access to the ATM locations table as shown in the following code block. If we try to add a new ATM, we will not be able to do so:
[root@ip-172-31-95-213 centos]# curl http://localhost:3000/ATM%20locations -X POST -H "Content-Type: application/json" -d '{"BankName":"Test Bank","Address":"99 Test way","County":"New York","City":"New York","State":"NY","ZipCode":10271}'
Response is 401 Unauthorized:
{"hint":null,"details":null,"code":"42501","message":"permission denied for table ATM locations"}
Now we have established a basic API on the database by PostgREST. As you see, the POST request to insert a new ATM location has failed because web_anon was granted select privileges only.
Adding a trusted user
In order to perform more data manipulation on PostgREST, we will have to employ user roles with all privileges granted on the ATM locations table:
- First, we will execute the following query using pgAdmin to create a new role:
create role atm_user nologin;
grant atm_user to dba;
grant usage on schema public to atm_user;
grant all on public."ATM locations" to atm_user;
grant usage, select on sequence public."ATM locations_ID_seq" to atm_user;
On execution of the preceding script, the result will be as shown in Figure 6.6:
- Next, we will set a password and provide it to PostgREST. You can use the following statements to generate a random password, or choose your own; either way, the password must be at least 32 characters long:
[root@ip-172-31-95-213 postgrest]# export LC_CTYPE=C
[root@ip-172-31-95-213 postgrest]# < /dev/urandom tr -dc A-Za-z0-9 | head -c32
DFZ49GQGubpzcSbt3t2uMIiBF6pU4PJ8
[root@ip-172-31-95-213 postgrest]#
/dev/urandom is the Linux random byte generator. The < /dev/urandom redirection feeds a random stream into tr, which keeps only the characters A-Za-z0-9, and head then takes the first 32 of them.
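The same pipeline can be wrapped in a short script if you want to automate the secret generation (the PGRST_SECRET variable name here is ours, not from the chapter):

```shell
# Generate a 32-character alphanumeric secret from the kernel RNG
export LC_CTYPE=C
PGRST_SECRET=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c32)
echo "$PGRST_SECRET"
```

Each run prints a fresh 32-character string drawn only from A-Za-z0-9, suitable for the jwt-secret setting below.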
- We will add this generated password to the tutorial.conf file as shown:
jwt-secret = "<the password you made>"
On the console, the preceding command will look as shown in Figure 6.7:
- Because the PostgREST server is still running in our previous terminal, we will now restart it to load the updated configuration file (press Ctrl + C to stop the server, then start PostgREST again), as shown in Figure 6.8:
Now, whoever intends to use our PostgREST API will have to supply the correct password in order to issue insertion requests as the previously created user.
Creating a PostgREST token
In order to make use of the trusted user and the PostgREST password, we will combine these two values into a token; PostgREST will then accept any API request carrying that token as coming from the correct trusted user with the correct password:
We'll get started with creating the token using the following steps:
- We will first open https://jwt.io on the browser and fill in the required details as shown:
- Password: DFZ49GQGubpzcSbt3t2uMIiBF6pU4PJ8
- JSON: "role": "atm_user"
- Copy the result token: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYXRtX3VzZXIifQ.ZL8hsLj5cewZvgb81EXm5vC3Jpn_4TzpszwMBStu-xo
You can see the preceding values are entered in Figure 6.9.
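Under the hood, jwt.io simply base64url-encodes the header and payload and signs them with HMAC-SHA256. As a sketch of that process (assuming openssl and base64 are available; the b64url helper is ours), the same token can be built from the shell:

```shell
# Build the HS256 JWT by hand: base64url(header).base64url(payload).signature,
# signed with the 32-character secret from the previous step
SECRET="DFZ49GQGubpzcSbt3t2uMIiBF6pU4PJ8"
b64url() { base64 | tr '+/' '-_' | tr -d '=\n'; }
header=$(printf '%s' '{"alg":"HS256","typ":"JWT"}' | b64url)
payload=$(printf '%s' '{"role":"atm_user"}' | b64url)
signature=$(printf '%s.%s' "$header" "$payload" \
  | openssl dgst -binary -sha256 -hmac "$SECRET" | b64url)
echo "$header.$payload.$signature"
```

The first two dot-separated segments of the output match the token shown above, because base64url encoding of the same JSON is deterministic.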
- In the other terminal (not the one that PostgREST is running inside), we will now use the encoded token to add a new ATM location with the following commands:
[centos@ip-172-31-95-213 ~]$ export TOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYXRtX3VzZXIifQ.ZL8hsLj5cewZvgb81EXm5vC3Jpn_4TzpszwMBStu-xo"
[centos@ip-172-31-95-213 ~]$ curl http://localhost:3000/ATM%20locations -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" -d '{"BankName":"Test Bank","Address":"99 Test way","County":"New York","City":"New York","State":"NY","ZipCode":10271}'
Because you authorize your POST request with the correct token as shown previously, PostgREST treats your request as authenticated with the correct password for atm_user, who has all the privileges on the ATM locations table. After the preceding command is executed, the console shows no errors, as shown in Figure 6.10:
- After that, we can see the new test ATM location in pgAdmin, as shown in Figure 6.11:
- Next, we will create a token that expires after 5 minutes, instead of the everlasting token we used previously. First, we calculate the epoch value of 5 minutes from now with the following query:
select extract(epoch from now() + '5 minutes'::interval) :: integer;
The result of the preceding select statement will be the epoch value as shown in Figure 6.12:
As shown in the preceding screenshot, this epoch value is 1566880492.
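The same epoch value can be computed outside SQL with GNU date (a sketch; the -d flag shown here is GNU-specific, and BSD/macOS date uses different options):

```shell
# Unix epoch timestamp for "5 minutes from now", matching the SQL query
# extract(epoch from now() + '5 minutes'::interval)
exp=$(date -d '+5 minutes' +%s)
echo "$exp"
```

Whichever way you compute it, the integer goes into the exp claim of the token payload in the next step.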
- Now, go back to https://jwt.io and change the payload as shown in the following code block:
{
"role": "atm_user",
"exp": 1566880492
}
With this payload, jwt.io produces a new token that will expire after 5 minutes, as shown in Figure 6.13:
- As shown in the following code block, when we use the new token after the 5 minutes have passed, PostgREST returns an expiration message with an HTTP 401 Unauthorized error:
[centos@ip-172-31-95-213 ~]$ export TOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYXRtX3VzZXIiLCJleHAiOjE1NjY4ODA0OTJ9.Imw7-Mbxejo9SCL0FGfAG0lR166fZ6ujhpEIlqvvlig"
[centos@ip-172-31-95-213 ~]$ curl http://localhost:3000/ATM%20locations -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" -d '{"BankName":"Test Bank 2","Address":"100 Test way","County":"New York","City":"New York","State":"NY","ZipCode":10272}'
{"message":"JWT expired"}
[centos@ip-172-31-95-213 ~]$
- In addition to expiring tokens, we will now demonstrate how to block a specific user by their email address, for example, not.good@mypostgrest.com. We will make the payload JSON as shown:
{
  "role": "atm_user",
  "email": "not.good@mypostgrest.com"
}
With this payload, we obtain the following token:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYXRtX3VzZXIiLCJlbWFpbCI6Im5vdC5nb29kQG15cG9zdGdyZXN0LmNvbSJ9.Alz9Wm7oQ4igcZA9mr-OjgwPJ_d3PisvmKAnb29xLMQ
Therefore, the preceding token is considered a bad token: it identifies an ATM user with the forbidden not.good@mypostgrest.com email. Any new request carrying this token is assumed to come from the bad user that we are trying to block.
- Next, we will create the following function using pgAdmin. This function reads the email claim from the authorized token of each new request; if the extracted email is the blocked one, it raises an exception to reject the request:
create schema auth;
grant usage on schema auth to web_anon, atm_user;

create or replace function auth.check_token() returns void
language plpgsql
as $$
begin
  if current_setting('request.jwt.claim.email', true) = 'not.good@mypostgrest.com' then
    raise insufficient_privilege using hint = 'Nope, we are on to you';
  end if;
end $$;
- Next, we will stop the PostgREST server by pressing Ctrl + C on the console, and then update the tutorial.conf file and specify the new function as shown:
pre-request = "auth.check_token"
An actual view of the preceding command on the console is shown in Figure 6.14:
- Then we restart PostgREST and call the tutorial.conf file as shown:
[root@ip-172-31-95-213 postgrest]# postgrest tutorial.conf
- Then, as shown, if we now use the blocked user's token to insert, we will get a 403 Forbidden error because that user is not allowed:
[centos@ip-172-31-95-213 ~]$ export BADTOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYXRtX3VzZXIiLCJlbWFpbCI6Im5vdC5nb29kQG15cG9zdGdyZXN0LmNvbSJ9.Alz9Wm7oQ4igcZA9mr-OjgwPJ_d3PisvmKAnb29xLMQ"
[centos@ip-172-31-95-213 ~]$ curl http://localhost:3000/ATM%20locations -X POST -H "Authorization: Bearer $BADTOKEN" -H "Content-Type: application/json" -d '{"BankName":"Test Bank 2","Address":"100 Test way","County":"New York","City":"New York","State":"NY","ZipCode":10272}'
{"hint":"Nope, we are on to you","details":null,"code":"42501","message":"insufficient_privilege"}
[centos@ip-172-31-95-213 ~]$
The pre-request = "auth.check_token" setting makes PostgREST execute the check_token function before accepting the insertion. Because the token sent to PostgREST in this case includes the blocked email, the insufficient_privilege exception is raised and this POST request is rejected.
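The pre-request check keys off the email claim carried inside the token. You can confirm which claims any token carries by decoding its middle segment; this is plain base64url, so decoding requires no secret and proves nothing about validity:

```shell
# The blocked token from the previous step
BADTOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYXRtX3VzZXIiLCJlbWFpbCI6Im5vdC5nb29kQG15cG9zdGdyZXN0LmNvbSJ9.Alz9Wm7oQ4igcZA9mr-OjgwPJ_d3PisvmKAnb29xLMQ"
# Cut out the middle (payload) segment and base64-decode it to see the claims
claims=$(printf '%s\n' "$BADTOKEN" | cut -d. -f2 | base64 -d)
echo "$claims"
# prints {"role":"atm_user","email":"not.good@mypostgrest.com"}
```

This is a handy debugging step when a pre-request function rejects a request and you want to see exactly which claims PostgREST received.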
PostgREST administration
We will now set up PostgREST so that whenever we start our ec2 instance, the API service will automatically start:
- We will configure PostgREST as a service with the help of the following command:
[root@ip-172-31-95-213 postgrest]# mkdir /etc/postgrest
[root@ip-172-31-95-213 postgrest]# vi /etc/postgrest/config
------------------------------------------
db-uri = "postgres://dba:[email protected]/atm"
db-schema = "public"
db-anon-role = "web_anon"
db-pool = 10
server-host = "127.0.0.1"
server-port = 3000
jwt-secret = "DFZ49GQGubpzcSbt3t2uMIiBF6pU4PJ8"
------------------------------------------
After the preceding command is executed, the console will look as shown in Figure 6.15:
- Then create the systemd service file by using the following command:
[root@ip-172-31-95-213 postgrest]# ln -s /usr/local/bin/postgrest /bin/postgrest
[root@ip-172-31-95-213 postgrest]# vi /etc/systemd/system/postgrest.service
------------------------------------------
[Unit]
Description=REST API for any Postgres database
After=postgresql.service
[Service]
ExecStart=/bin/postgrest /etc/postgrest/config
ExecReload=/bin/kill -SIGUSR1 $MAINPID
[Install]
WantedBy=multi-user.target
------------------------------------------
After the preceding command is executed, the console will look as shown in Figure 6.16:
- Now we can enable the postgrest service at boot time and start it with the following:
[root@ip-172-31-95-213 postgrest]# systemctl enable postgrest
Created symlink from /etc/systemd/system/multi-user.target.wants/postgrest.service to /etc/systemd/system/postgrest.service.
[root@ip-172-31-95-213 postgrest]# systemctl start postgrest
From now on, whenever the CentOS ec2 instance starts, the API will also be running on port 3000.
PostgREST on TimescaleDB
TimescaleDB is an open source database for time series data; we first encountered TimescaleDB when we investigated standalone PostgREST, which lists it as a supported extension. PostgREST is therefore also able to create an API on top of TimescaleDB.
Unfortunately, the PostgreSQL RDS does not support TimescaleDB. To run TimescaleDB on AWS, we will have to use the more recent Timescale Cloud service, which hosts TimescaleDB on AWS infrastructure:
- Please visit the following link to sign up for a new user account at Timescale Cloud: https://www.timescale.com/cloud-signup.
- Please enter your full name, email address, and password for Timescale Cloud:
- Please press the Create an account button, which will lead you to the Email confirmed screen:
- Please find the Timescale Cloud registration email from your inbox and open the verification link supplied inside that email. You will see the acceptance of your new Timescale account as shown here:
- After that, you can visit the Timescale Cloud portal and log in with your username and password. You can also use the following link: http://portal.timescale.cloud/.
- Please click on the + Create a new service button:
- On the next screen, please select the TimescaleDB service for PostgreSQL 12 for step 1:
In step 2, please select AWS to connect to TimescaleDB.
- For step 3, we are selecting the network zone named timescale-aws-us-east-1 in the United States:
- In step 4, we can select a development plan for our deployment:
- Please review the service details for the last time, and then click on the Create Service button:
- Timescale Cloud will bring you back to the services screen; you will have to wait a few minutes until your new service is ready:
- Please click on the name of the postgresql-book service highlighted in the preceding figure. You will reach the overview page of the service details, as shown here:
Please copy the following TimescaleDB details:
- Database Name: defaultdb
- Host: postgresql-book-postgresql-1728.a.timescaledb.io
- Port: 20153
- User: tsdbadmin
- Please launch pgAdmin and set up a new connection to your new TimescaleDB service. We right-click on Servers and then select Create | Server...:
- Please enter a new name for your pgAdmin server, and then select the Connection tab:
- Please fill in your values copied from Timescale Cloud here:
We have the following parameters here:
- Host name/address: postgresql-book-postgresql-1728.a.timescaledb.io
- Port: 20153
- Maintenance database: defaultdb
- Username: tsdbadmin
- Password: Copied from Timescale Cloud
- Please click the Save button, then we can expand the new ATM-TimescaleDB server as follows:
- Because TimescaleDB is a PostgreSQL extension for time series data, that is, data that changes over time, we can choose the banking topic of exchange rates. Exchange rates change every hour, so this kind of banking data is well suited to illustrating time series in TimescaleDB.
Therefore, please open the following GitHub link to create new tables for banking exchange rates of currencies: https://github.com/lequanha/PostgreSQL-12-Development-and-Administration-Projects/blob/master/Chapter 6/FX-Schema.sql.
- Please copy the SQL script from the preceding GitHub link as in the following screenshot:
- We select the expected location as ATM-TimescaleDB | Databases | defaultdb | Schemas | public, then please paste the preceding SQL script into the Query Editor of your pgAdmin:
Finally, press the (Execute/Refresh (F5)) icon to create our new tables.
- First, we convert the ExchangeRates table into a hypertable with time partitioning on the time column. Then, we add an additional partitioning key on the currencycode column, with three partitions, using the following SQL script:
SELECT create_hypertable('"ExchangeRates"', 'time');
SELECT add_dimension('"ExchangeRates"', 'currencycode', number_partitions => 3);
Hypertables in TimescaleDB are designed to behave like ordinary PostgreSQL tables, and developers use them to improve the read and write performance of the database. Compared to a traditional table, a hypertable over big data can perform up to 15 times faster, while over small data it still inserts, updates, deletes, and selects at the same speed, causing no additional overhead.
- We can copy the preceding script into the Query Editor of pgAdmin:
- Please open the following GitHub link that contains the SQL data script for our currencies: https://github.com/lequanha/PostgreSQL-12-Development-and-Administration-Projects/blob/master/Chapter 6/Currencies.sql.
We can paste the script from GitHub into the Query Editor of pgAdmin to execute as follows:
- We are going to insert exchange rates from 1 USD to CAD, 1 USD to GBP, and 1 USD to EUR. The exchange rates are stored every hour for the first 6 months from January to the end of June 2020.
Please use the following GitHub link: https://github.com/lequanha/PostgreSQL-12-Development-and-Administration-Projects/blob/master/Chapter 6/USDCAD_Jan2020_Jun2020.sql.
When you open this link, you will see the INSERT script of 4,368 records of USD-CAD exchange rates:
We requested permission from Dukascopy to use their data in this book, and they granted it. The data is available here: https://www.dukascopy.com/swiss/english/marketwatch/historical/.
- The preceding GitHub INSERT script can be copied into pgAdmin to execute as follows:
- Similarly, we next import USD-GBP exchange rates and USD-EUR rates for the first 6 months of 2020 from the following two GitHub links:
- https://github.com/lequanha/PostgreSQL-12-Development-and-Administration-Projects/blob/master/Chapter 6/USDGBP_Jan2020_Jun2020.sql
- https://github.com/lequanha/PostgreSQL-12-Development-and-Administration-Projects/blob/master/Chapter 6/USDEUR_Jan2020_Jun2020.sql
- Please select the ExchangeRates table on the left panel, and then press the (View Data) icon on the toolbar to display the final hourly data of all USD-CAD, USD-GBP, and USD-EUR rates inside our TimescaleDB now:
- Please use this timing SELECT query to filter weekly exchange rates from our hourly table as follows:
SELECT time_bucket('1 week', time) as period,
last(closingrate, time) AS last_closing_rate,
currencycode
FROM "ExchangeRates"
GROUP BY period, currencycode
ORDER BY period DESC;
This is the screenshot from pgAdmin:
Hence, for the latest week, starting Monday, June 29th, 1 USD was equal to 1.357 CAD, 0.891 EUR, or 0.808 GBP. The week before that, starting Monday, June 22nd, 1 USD was equal to 1.368 CAD, 0.891 EUR, or 0.810 GBP, and so on.
- If we would like to find out the first day of each exchange rate inside our TimescaleDB, we can use the min function of time data as follows:
SELECT ci.currencycode, min(c.time)
FROM "Currencies" ci JOIN "ExchangeRates" c ON ci.currencycode = c.currencycode
GROUP BY ci.currencycode
ORDER BY min(c.time) DESC;
Please copy the preceding SELECT query into pgAdmin:
Hence, all of the three types of USD-CAD, USD-EUR, and USD-GBP exchange rates start in our TimescaleDB from January 1st, 2020.
- Now, let's practice creating a new view that filters daily exchange rates from our original hourly data, as follows:
CREATE VIEW ExchangeRates_Daily
WITH (timescaledb.continuous)
AS
SELECT
time_bucket('1 day', time) AS period,
first(openingrate, time) AS openingrate,
max(highestrate) AS highestrate,
min(lowestrate) AS lowestrate,
last(closingrate, time) AS closingrate,
currencycode
FROM "ExchangeRates"
GROUP BY period, currencycode;
The time_bucket frequency is 1 day for the daily filter. The first function retrieves the earliest exchange rate within each day, while the last function selects the latest exchange rate of that day:
- Please select the view on the left panel by going to Schemas | public | Views | exchangerates_daily, and then press the (View Data) icon on the toolbar to display the daily data of exchange rates:
Hence, on New Year's Day, January 1st, at the opening of the day, 1 USD could exchange for 1.299 CAD, 0.892 EUR, or 0.754 GBP; at the closing of the same day, 1 USD could exchange for 1.299 CAD, 0.891 EUR, or 0.755 GBP. Similarly, for January 2nd at closing time, 1 USD converted to less than 1.300 CAD, 0.895 EUR, or 0.761 GBP, and so on.
- In order to check the status of the view materialization job, please use this TimescaleDB query:
SELECT * FROM timescaledb_information.continuous_aggregate_stats;
The result is shown:
It means that the last time the ExchangeRates_Daily view was materialized was at 01:02 a.m. on July 22nd, completing after 9 milliseconds (which is very fast!).
Summary
In this chapter, we have implemented PostgREST step by step to create an automatic API service for PostgreSQL version 12. Along the way, developers have also practiced Docker usage, psql inside a Docker container, the first execution of PostgREST, database users and roles, and JSON Web Tokens. We also learned how to run PostgREST as a service.
The second part of the chapter introduced PostgreSQL time series data with TimescaleDB on Timescale Cloud, a PostgreSQL extension that works well with PostgREST. This chapter concludes the PostgreSQL development part of the book.
In the next chapters, we will focus on PostgreSQL administration.