Walking through the generated files

The files generated by JHipster are organized by application. That is, each application has its own folder, and the files related to that service are placed inside it.

We will start with the store gateway application. Three files are generated for it: store-service.yml, store-mysql.yml, and store-deployment.yml.

The following is the store-service.yml file:

apiVersion: v1
kind: Service
metadata:
  name: store
  namespace: jhipster
  labels:
    app: store
spec:
  selector:
    app: store
  type: LoadBalancer
  ports:
    - name: http
      port: 8080

The first line defines the Kubernetes API version we want to target, followed by the kind of object that this template defines. This template defines a Service.

Then, we have the metadata information. Kubernetes uses this metadata information to group certain services together. In the metadata, we can define the following:

  • The service name
  • The namespace the object belongs to
  • The labels, which are key and value pairs

Then, we have the spec. The spec in a Kubernetes object describes the desired state of that object; depending on the kind, it can define things such as the number of replicas we need. Here, we have the selector, which identifies the pods this service routes traffic to by their labels (we will see the deployment spec soon). We also specify the type of the service, followed by the ports on which the application should be reachable. As in the Dockerfile, we expose port 8080 for the gateway service.
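To make the selector mechanism concrete, the following is a minimal, hypothetical sketch (the names and image are illustrative, not from the generated files) of how a Service selector matches the labels in a Deployment's pod template:

```yaml
# Hypothetical example: the Service routes traffic to any pod whose
# labels include app: my-app, which the Deployment's pod template sets.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app          # must match the pod template labels below
  ports:
    - port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app      # matched by the Service selector above
    spec:
      containers:
        - name: my-app
          image: my-app:latest   # illustrative image name
          ports:
            - containerPort: 8080
```

The key point is that a Service knows nothing about Deployments; it only matches pod labels, so the Service's selector and the pod template's labels must agree.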

Then, we have the store-mysql.yml file, where we have defined our MySQL server for the store application. The difference here is that the service spec points to store-mysql, which is defined in the same file and is exposed on port 3306:

apiVersion: v1
kind: Service
metadata:
  name: store-mysql
  namespace: jhipster
spec:
  selector:
    app: store-mysql
  ports:
    - port: 3306

In the store-mysql app declaration, as shown in the next snippet, we have specified the database and environment properties that are needed for our application to run. Here, the kind is Deployment. The job of the Deployment object is to drive the actual state of the cluster toward the desired state defined in the object.

Here, we have defined a single replica of the MySQL server, followed by the spec where we have mentioned the version of MySQL that we need (the container).

When it comes to databases, it is often preferable to use external database services rather than having the database in Kubernetes to reduce complexity.

This is then followed by the environment where we have the username, password, and then the database schema. We also have the volume information with volume mounts for persistent storage.

Note that a spec can be nested inside another spec: the Deployment's spec contains a pod template, which carries a spec of its own (as shown in the following code):

apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  replicas: 1
  selector:
    matchLabels:
      app: store-mysql
  template:
    metadata:
      labels:
        app: store-mysql
    spec:
      ...
      containers:
        - name: mysql
          image: mysql:8.0.18
          env:
            ...
          args:
            ...
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql/

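For reference, the elided env and volumes sections of a MySQL container typically look something like the following hypothetical sketch (the variable names come from the official mysql image; the values and the volume type are illustrative, not the generated ones):

```yaml
# Hypothetical sketch of the env and volumes sections; values are
# illustrative only. The data volume backs the volumeMount of the
# same name at /var/lib/mysql/.
env:
  - name: MYSQL_ALLOW_EMPTY_PASSWORD
    value: "yes"
  - name: MYSQL_DATABASE
    value: store
volumes:
  - name: data
    emptyDir: {}      # illustrative; real setups use a persistent volume
```

An emptyDir volume is wiped when the pod is rescheduled, which is another reason the chapter's advice to prefer an external database service is worth taking seriously in production.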
Similarly, we have store-deployment.yml, in which we have defined the store gateway application and its environment properties, along with the other details such as initialization containers, ports, resource limits, probes, and so on:

apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  replicas: 1
  selector:
    ...
  template:
    ...
    spec:
      initContainers:
        - name: init-ds
          image: busybox:latest
          command: # This waits for DB to be ready
            ...
      containers:
        - name: store-app
          image: deepu105/store
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: prod
            ...
            - name: JHIPSTER_REGISTRY_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: registry-secret
                  key: registry-admin-password
            ...
          resources:
            requests:
              ...
            limits:
              ...
          ports:
            ...
          readinessProbe:
            ...
          livenessProbe:
            ...
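The elided readiness and liveness probes typically point at the application's health endpoint over HTTP. The following is a hypothetical sketch (the endpoint path and timings are illustrative, not the generated values):

```yaml
# Hypothetical probe configuration; the path and timings are
# illustrative. Readiness gates traffic; liveness restarts the
# container when it stops responding.
readinessProbe:
  httpGet:
    path: /management/health
    port: 8080
  initialDelaySeconds: 20
  periodSeconds: 15
livenessProbe:
  httpGet:
    path: /management/health
    port: 8080
  initialDelaySeconds: 120
  periodSeconds: 30
```

The liveness probe's longer initial delay matters for Spring Boot applications, which can take a while to start; a too-aggressive probe would restart the container before it ever becomes healthy.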

A similar approach is used for both the invoice and notification services. You can find them in their respective folders.

For the JHipster Registry, alongside a Service and a Deployment, a Secret and a StatefulSet are defined.

The Secret is used to handle passwords. It is of the Opaque type, and the password is stored Base64-encoded (note that Base64 is an encoding, not encryption, so a Secret alone does not make the value confidential).
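As a hypothetical sketch (the password value is illustrative; the Secret and key names are the ones referenced by the store deployment's secretKeyRef above), such a Secret looks like this:

```yaml
# Hypothetical Secret; the value is the Base64 encoding of "admin",
# produced with: printf '%s' admin | base64
apiVersion: v1
kind: Secret
metadata:
  name: registry-secret
  namespace: jhipster
type: Opaque
data:
  registry-admin-password: YWRtaW4=
```

The store deployment consumes this value through the secretKeyRef in its env section, which is how the JHIPSTER_REGISTRY_PASSWORD variable is populated without the password appearing in the deployment file itself.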

Then, we have the StatefulSet, which is similar to a Deployment except that its pods get sticky identities. While pods are normally dynamic and interchangeable, the pods of a StatefulSet keep a persistent identifier that is maintained across rescheduling. It makes sense to define the registry server as a StatefulSet, since it is essential that the registry server can be reached under a stable identity. This enables all services to connect to that single endpoint and get the necessary information. If the registry server is down, then communication between the services will also have problems, since the services discover and reach each other via the registry server.
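A minimal, hypothetical StatefulSet sketch follows (the names, image tag, and replica count are illustrative). The parts that distinguish it from a Deployment are the serviceName field, which names the governing Service used for stable per-pod DNS, and the resulting stable pod names such as jhipster-registry-0:

```yaml
# Hypothetical StatefulSet; a StatefulSet is paired with a (usually
# headless) governing Service, and its pods get ordinal, stable names.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jhipster-registry
  namespace: jhipster
spec:
  serviceName: jhipster-registry   # governing Service for stable DNS
  replicas: 2
  selector:
    matchLabels:
      app: jhipster-registry
  template:
    metadata:
      labels:
        app: jhipster-registry
    spec:
      containers:
        - name: jhipster-registry
          image: jhipster/jhipster-registry:latest   # illustrative tag
          ports:
            - containerPort: 8761
```

Each replica is addressable individually (for example, jhipster-registry-0.jhipster-registry), which is what lets the registry peers find each other and lets clients rely on a stable endpoint.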

There are various controller options available, which are as follows:

  • ReplicaSet: This ensures that a specified number of pod replicas are running at any time; it supports set-based label selectors.
  • ReplicationController: This is the older equivalent of a ReplicaSet; it ensures a number of replicas but only supports equality-based selectors.
  • StatefulSet: This makes each pod unique by providing it with a persistent identifier.
  • DaemonSet: This ensures that a copy of the pod runs on every node (or on a selected set of nodes).

The JHipster Registry is configured in a cluster with high availability. The UI access to the JHipster Registry is also restricted to the cluster for better security. 

Similarly, configuration files are generated for the JHipster Console; the console itself is defined in the jhipster-console.yml file inside its folder.

The JHipster Console is built on the Elastic (ELK) Stack, so we also need Elasticsearch, which is defined in jhipster-elasticsearch.yml, followed by Logstash, which is defined in the jhipster-logstash.yml file.

Commit the generated files to Git.

Now, let's see how we can deploy this.
