© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2022
A. Markelov, Certified OpenStack Administrator Study Guide, Certification Study Companion Series, https://doi.org/10.1007/978-1-4842-8804-7_7

7. OpenStack Compute

Andrey Markelov, Stockholm, Sweden

This chapter covers 35% of the Certified OpenStack Administrator exam requirements. OpenStack Compute is the heart of OpenStack, so you must study this chapter thoroughly to pass the exam.

Nova’s Architecture and Components

Nova, OpenStack’s compute service, is the heart of the OpenStack cloud. Its main goal is to manage basic virtual machine functions such as creating, starting, and stopping instances. Let’s look at the architecture and general parts of Nova. As with other services, Nova uses a message broker and a database. By default, the database is MariaDB, and the message broker is RabbitMQ. The main services that make up Nova are:
  • nova-api is a service that receives REST API calls from other services and clients and responds to them.

  • nova-scheduler is Nova’s scheduling service. It takes requests for starting instances from the queue and selects a compute node to run the virtual machine on. The hypervisor is selected based on weights and filters. Filters can be based on the amount of available memory, the requested availability zone, a group of hosts, and so on. The rules are applied each time an instance is started or migrated to another hypervisor.

  • nova-conductor is the proxy service between the database and the nova-compute services. It helps with horizontal scalability.

  • nova-compute is the main part of an IaaS system. This daemon usually runs only on compute nodes. Its role is to control the hypervisor through the hypervisor’s specific API. It is designed to manage pools of compute resources and can work with widely available virtualization technologies.

  • placement-api is a REST API stack and data model used to track resource provider inventories, usages, and different classes of resources. It was introduced in the Newton release of OpenStack.

  • nova-novncproxy and nova-spicehtml5proxy are services that provide access to the instances’ consoles through the VNC and SPICE remote access protocols.

Figures 7-1 and 7-2 illustrate the process of starting an instance.

A workflow chart explains the process to start an instance.

Figure 7-1

Instance provision workflow—Part I

A photograph represents two workflow charts to start an instance.

Figure 7-2

Instance provision workflow—Part II

In this example, two hosts are used: a compute host, which acts as the hypervisor and runs the nova-compute service, and a controller node, which runs all the management services. The following describes the workflow for starting the instance.
  1. The client (in this particular example, the client is the Horizon web client, but it can also be the openstack CLI command) asks keystone-api for authentication and gets an access token.

  2. If authentication succeeds, the client sends a request to run an instance to nova-api. This is equivalent to the openstack server create command. (A minimal API-level sketch of these first two steps follows this list.)

  3. Nova validates the token and receives headers with roles and permissions from keystone-api.

  4. Nova checks the database for conflicts with existing object names and creates a new entry for this instance in its database.

  5. nova-api sends an RPC request for scheduling the instance to the nova-scheduler service.

  6. The nova-scheduler service picks up the request from the message queue.

  7. The nova-scheduler service queries the placement service for available resources.

  8. Placement returns a list of available resources to nova-scheduler.

  9. nova-scheduler uses filters and weights to build a list of target nodes. The scheduler then sends an RPC call to the nova-compute service to launch the virtual machine.

  10. The nova-compute service picks up the request from the message queue.

  11. The nova-compute service asks nova-conductor to fetch information about the instance, for example, the host ID and flavor.

  12. The nova-conductor service picks up the request from the message queue.

  13. The nova-conductor service gets the information about the instance from the database.

  14. The nova-compute service takes the instance information from the queue. The compute host now knows which image is used to start the instance, so nova-compute asks the glance-api service for the image URL.

  15. glance-api validates the token and returns the image’s metadata, including the URL.

  16. The nova-compute service passes a token to neutron-api and asks it to configure the network for the instance.

  17. Neutron validates the token and configures the network.

  18. nova-compute interacts with cinder-api to attach the volume to the instance.

  19. nova-compute generates data for the hypervisor and executes the request via libvirt.
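
You can reproduce the first two steps of this workflow manually with plain REST calls, which helps to understand what the CLI does under the hood. The following is a minimal sketch using curl. The endpoint addresses, image ID, flavor ID, and network ID are taken from the examples in this chapter; the demo user, its password, and the project name are assumptions that you must adapt to your environment.
# Step 1: obtain a token from keystone-api; it is returned in the X-Subject-Token header
TOKEN=$(curl -si -H "Content-Type: application/json" \
  -d '{"auth": {"identity": {"methods": ["password"], "password": {"user":
        {"name": "demo", "domain": {"name": "Default"}, "password": "password"}}},
       "scope": {"project": {"name": "demo", "domain": {"name": "Default"}}}}}' \
  http://192.168.122.10:5000/v3/auth/tokens \
  | grep -i x-subject-token | awk '{print $2}' | tr -d '\r')
# Step 2: ask nova-api to boot an instance; this is what "openstack server create" sends
curl -s -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
  -d '{"server": {"name": "apressinst-api",
        "imageRef": "7ffe1b43-7e86-4ad0-86b6-9fffa38b3c20",
        "flavorRef": "1",
        "networks": [{"uuid": "5ee4e933-de9b-4bcb-9422-83cc0d276d33"}]}}' \
  http://192.168.122.10:8774/v2.1/servers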

     
Now let’s look at Nova’s main configuration file, /etc/nova/nova.conf. Table 7-1 lists the options available.
Table 7-1

Main Configuration Options in /etc/nova/nova.conf

Examples of Config Options

Description

[DEFAULT]

my_ip = 192.168.122.10

Management interface IP address of the controller node

[DEFAULT]

enabled_apis = osapi_compute,metadata

Enables support for the compute service and metadata APIs

[api]

auth_strategy = keystone

[keystone_authtoken]

www_authenticate_uri = http://192.168.122.10:5000/

auth_url = http://192.168.122.10:5000/

memcached_servers = 192.168.122.10:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = nova

password = password

Authentication parameters: endpoints and other parameters like default project name, domain name, project name for services, and account information for Nova user

[api_database]

connection=mysql+pymysql://nova_api:password@192.168.122.10/nova_api

[database]

connection=mysql+pymysql://nova:password@192.168.122.10/nova

Connection strings used to connect to Nova’s databases

[DEFAULT]

transport_url = rabbit://openstack:password@192.168.122.10:5672/

RabbitMQ broker address, port, user name, and password

[vnc]

enabled = true

vncserver_listen = $my_ip

vncserver_proxyclient_address = $my_ip

Management interface IP address of the VNC proxy

[glance]

api_servers=192.168.122.10:9292

Location of the Image Service API

[placement]

region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://192.168.122.10:5000/v3

username = placement

password = password

Configuration for access to the Placement service
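
If you need to change any of these options from the command line rather than with a text editor, the crudini utility (commonly available on RDO/PackStack systems) is handy; the following is a sketch under that assumption. After editing the file, restart the services that read nova.conf (in this all-in-one example, nova-api runs under Apache, so httpd is restarted as well).
# crudini --set /etc/nova/nova.conf DEFAULT my_ip 192.168.122.10
# crudini --set /etc/nova/nova.conf vnc enabled true
# systemctl restart openstack-nova-scheduler openstack-nova-conductor openstack-nova-compute httpd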

Managing Flavors

Instance flavor is a virtual machine template that describes the main parameters. It is also known as an instance type. Immediately after installing the OpenStack cloud, you can choose several predefined flavors. You can also add new flavors and delete existing ones. Use the following command to list the flavors.
$ openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 1  | m1.tiny   |   512 |    1 |         0 |     1 | True      |
| 2  | m1.small  |  2048 |   20 |         0 |     1 | True      |
| 3  | m1.medium |  4096 |   40 |         0 |     2 | True      |
| 4  | m1.large  |  8192 |   80 |         0 |     4 | True      |
| 5  | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
+----+-----------+-------+------+-----------+-------+-----------+
As you can see in the listing, some flavors were created during OpenStack's installation. In some distributions, the list is empty out of the box. To list the details of a flavor, use the following command. As you can see, m1.tiny has one vCPU and 512 MB of memory.
$ openstack flavor show m1.tiny
+----------------------------+---------+
| Field                      | Value   |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled   | False   |
| OS-FLV-EXT-DATA:ephemeral  | 0       |
| access_project_ids         | None    |
| description                | None    |
| disk                       | 1       |
| id                         | 1       |
| name                       | m1.tiny |
| os-flavor-access:is_public | True    |
| properties                 |         |
| ram                        | 512     |
| rxtx_factor                | 1.0     |
| swap                       |         |
| vcpus                      | 1       |
+----------------------------+---------+
By default, only the admin can list all the flavors and create new ones. Here is an example of the creation of a new publicly available flavor.
$ source keystonerc_admin
$ openstack flavor create --public --ram 400 --disk 3 --vcpus 1 m10.tiny
+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| access_project_ids         | None                                 |
| description                | None                                 |
| disk                       | 3                                    |
| id                         | 1c0afa82-08a3-42bc-b2f5-def335f0ed12 |
| name                       | m10.tiny                             |
| os-flavor-access:is_public | True                                 |
| properties                 |                                      |
| ram                        | 400                                  |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 1                                    |
+----------------------------+--------------------------------------+
In this example, a new flavor was created with the name m10.tiny that has a 3 GB disk, 400 MB RAM, and 1 vCPU. You can delete the flavor with the following command.
$ openstack flavor delete m10.tiny
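Flavors can also be created as private and then shared with selected projects. The following is a short sketch; the project name apress-project is an assumption.
$ openstack flavor create --private --ram 400 --disk 3 --vcpus 1 m10.private
$ openstack flavor set --project apress-project m10.private
$ openstack flavor show m10.private -c os-flavor-access:is_public -c access_project_ids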
For managing flavors in Horizon, go to Admin ➤ Compute ➤ Flavors (see Figure 7-3).

A screenshot of the OpenStack dashboard depicts the flavors.

Figure 7-3

Managing flavors in Horizon

Managing and Accessing an Instance Using a Key Pair

Before launching instances, you should know how to work with OpenSSH key pairs. Access to virtual machines with OpenSSH key-based authentication is essential for using GNU/Linux in the cloud computing environment.

SSH (Secure Shell) allows you to authenticate users by using a public-private key pair. You should generate two linked cryptographic keys: a public key and a private key. The public key can be given to anyone. The private key should be kept in a secure place; it belongs only to you. An instance running the OpenSSH server with your public key can issue a challenge that can only be answered by the system holding your private key. As a result, it can be authenticated through the presence of your key. This allows you to access a virtual machine in a way that does not require passwords.

OpenStack can store public keys and put them inside the instance when it is started. It is your responsibility to keep the private key secured. If you lose the key, you can’t recover it. In that case, you should remove the public key from your OpenStack cloud and generate a new key pair. If somebody steals a private key, they can get access to your instances.

Tip

In a GNU/Linux system, public keys are stored in the ∼/.ssh/authorized_keys file.

Let’s start by creating a key pair with the following command.
$ openstack keypair create apresskey1 > key1
This command creates a key pair. The private key is stored in the key1 file on your workstation.
$ cat key1
-----BEGIN RSA PRIVATE KEY-----
FliElAoNnAoKvQaELyeHnPaLwb8KlpnIC65PunAsRz5FsoBZ8VbnYhD76DON/BDVT
...
gdYjBM1CqqmUw54HkMJp8DLcYmBP+CRTwia9iSyY42Zw7eAi/QTIbQ574d8=
-----END RSA PRIVATE KEY-----
A public key is stored in your OpenStack cloud and is ready to use. You can check the list of public keys accessible to you with the following command.
$ openstack keypair list
+------------+-------------------------------------------------+------+
| Name       | Fingerprint                                     | Type |
+------------+-------------------------------------------------+------+
| apresskey1 | 1a:29:52:3c:19:cc:9d:61:c4:f1:98:03:02:85:b3:40 | ssh  |
+------------+-------------------------------------------------+------+
Before an SSH client can use a private key, you should make sure that the file has the correct GNU/Linux permissions.
$ chmod 600 key1
$ ls -l key1
-rw------- 1 andrey andrey 1676 Jul 20 15:44 key1
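If you already have a key pair generated with ssh-keygen, you do not need to let OpenStack generate a new one; you can upload just the public part. A short sketch, assuming your public key is stored in ~/.ssh/id_rsa.pub:
$ openstack keypair create --public-key ~/.ssh/id_rsa.pub apresskey2
$ openstack keypair show apresskey2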
If you want to create and delete key pairs in Horizon, go to Project ➤ Compute ➤ Key Pairs (see Figure 7-4).

A screenshot of the OpenStack dashboard depicts the key pairs.

Figure 7-4

Managing key pairs in Horizon

When your instance runs and has a floating IP, you can connect to it with a similar command.
$ ssh -i key1 cirros@<floating_ip>

The -i option points to your private key. The next section explains how to run an instance and inject a public key into it.

Launching, Shutting Down, and Terminating the Instance

If you have only one network in the project, you need at least three parameters to start an instance: the name of the instance, the flavor, and the source of the instance. The instance source can be an image, a snapshot, or a block storage volume. At boot time, you can specify optional parameters, like key pairs, security groups, user data files, and a volume for persistent storage. In this example environment, there are two networks.
$ openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 5ee4e933-de9b-4bcb-9422-83cc0d276d33 | demo-net | 18736455-80f6-4513-9d81-6cedbfe271fe |
| 5f18929b-70f6-4729-ac05-7bea494b9c5a | ext-net  | d065c027-bb60-4464-9619-7d9754535c5c |
+--------------------------------------+----------+--------------------------------------+
In this case, you need to specify the network. You can try the following example.
$ openstack server create --image cirros-0.5.2-x86_64 --flavor m1.tiny --network demo-net --security-group apress-sgroup --key-name apresskey1 apressinst3
+-----------------------------+-------------------------------------------+
| Field                       | Value                                     |
+-----------------------------+-------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                    |
| OS-EXT-AZ:availability_zone |                                           |
| OS-EXT-STS:power_state      | NOSTATE                                   |
| OS-EXT-STS:task_state       | scheduling                                |
| OS-EXT-STS:vm_state         | building                                  |
| OS-SRV-USG:launched_at      | None                                      |
| OS-SRV-USG:terminated_at    | None                                      |
| accessIPv4                  |                                           |
| accessIPv6                  |                                           |
| addresses                   |                                           |
| adminPass                   | Gp4VWJ7ZYpk7                              |
| config_drive                |                                           |
| created                     | 2022-07-20T14:50:34Z                      |
| flavor                      | m1.tiny (1)                               |
| hostId                      |                                           |
| id                          | e69b0560-0441-4ee4-b97c-35a05bc833c2      |
| image                       | cirros-0.5.2-x86_64 (7ffe1b43-7e86-4ad0-86b6-9fffa38b3c20)    |
| key_name                    | apresskey1                                |
| name                        | apressinst3                               |
| progress                    | 0                                         |
| project_id                  | 9e0c535c2240405b989afa450681df18          |
| properties                  |                                           |
| security_groups             | name='7748dc9f-1573-4225-a51e-8fc6328aafc0'                                        |
| status                      | BUILD                                     |
| updated                     | 2022-07-20T14:50:34Z                      |
| user_id                     | a20b5a5995b740ff90034297335b330a          |
| volumes_attached            |                                           |
+-----------------------------+-------------------------------------------+
This example runs an instance named apressinst3 with the flavor m1.tiny from an image named cirros-0.5.2-x86_64. You also specified the security group named apress-sgroup and the key pair apresskey1. To check the current state of the available instances, use the following command.
$ openstack server list
+--------------------------------------+-------------+--------+-----------------------+---------------------+---------+
| ID                                   | Name        | Status | Networks              | Image               | Flavor  |
+--------------------------------------+-------------+--------+-----------------------+---------------------+---------+
| e69b0560-0441-4ee4-b97c-35a05bc833c2 | apressinst3 | ACTIVE | demo-net=172.16.0.125 | cirros-0.5.2-x86_64 | m1.tiny |
+--------------------------------------+-------------+--------+-----------------------+---------------------+---------+
You may want to connect to the instance console in your browser by the noVNC client, which is the VNC client using HTML5 with encryption support. To get the URL, use the following command.
$ openstack console url show apressinst3
+----------+-----------------------------------------------------------+
| Field    | Value                                                     |
+----------+-----------------------------------------------------------+
| protocol | vnc                                                       |
| type     | novnc                                                     |
| url      | http://192.168.122.10:6080/vnc_auto.html?path=%3Ftoken%3D622f9d39-e362-4a1f-b280-467eac740155 |
+----------+-----------------------------------------------------------+
If you put the URL in your browser’s address bar, you can connect to the machine, as shown in Figure 7-5.

A screenshot depicts the console of the running instance.

Figure 7-5

Example of the console of running instance in a browser

Tip

If you get an error with a PackStack installation, check that the hostname of your OpenStack server is present in /etc/hosts.

If you prefer to work with instances in a GUI, you can use the Horizon web interface. For that, go to Project ➤ Compute ➤ Instances. An example of the series of launch dialogs is shown in Figure 7-6. Walk through them by using the Next button.

A screenshot depicts the launch instance dialog window.

Figure 7-6

Example of a launch instance dialog window

If there is an error, you may see something like the following.
$ openstack server list
+--------------------------------------+-------------+--------+----------+---------------------+-----------+
| ID                                   | Name        | Status | Networks | Image               | Flavor    |
+--------------------------------------+-------------+--------+----------+---------------------+-----------+
| c9831978-a84c-4df4-8c45-005c533fbf8b | apressinst3 | ERROR  |          | cirros-0.5.2-x86_64 | m1.xlarge |
+--------------------------------------+-------------+--------+----------+---------------------+-----------+
To get detailed information about the instance, you can run the following command.
$ openstack server show apressinst3
+-----------------------------+-------------------------------------------+
| Field                       | Value                                     |
+-----------------------------+-------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                    |
| OS-EXT-AZ:availability_zone |                                           |
| OS-EXT-STS:power_state      | NOSTATE                                   |
| OS-EXT-STS:task_state       | None                                      |
| OS-EXT-STS:vm_state         | error                                     |
| OS-SRV-USG:launched_at      | None                                      |
| OS-SRV-USG:terminated_at    | None                                      |
| accessIPv4                  |                                           |
| accessIPv6                  |                                           |
| addresses                   |                                           |
| config_drive                |                                           |
| created                     | 2022-07-21T12:20:43Z                      |
| fault                       | {'code': 500, 'created': '2022-07-21T12:20:43Z', 'message': 'No valid host was found. '}                  |
| flavor                      | m1.xlarge (5)                             |
| hostId                      |                                           |
| id                          | c9831978-a84c-4df4-8c45-005c533fbf8b      |
| image                       | cirros-0.5.2-x86_64 (7ffe1b43-7e86-4ad0-86b6-9fffa38b3c20)                   |
| key_name                    | apresskey1                                |
| name                        | apressinst3                               |
| project_id                  | 9e0c535c2240405b989afa450681df18          |
| properties                  |                                           |
| status                      | ERROR                                     |
| updated                     | 2022-07-21T12:20:43Z                      |
| user_id                     | a20b5a5995b740ff90034297335b330a          |
| volumes_attached            |                                           |
+-----------------------------+-------------------------------------------+
The instance was started with the following command.
$ openstack server create --image cirros-0.5.2-x86_64 --flavor m1.xlarge --network demo-net --security-group apress-sgroup --key-name apresskey1 apressinst3

From the fault message, it is easy to see that no hypervisor had enough room for such a big instance: the m1.xlarge flavor requires 16 GB of RAM.

The next command completely deletes this instance.
$ openstack server delete apressinst3
If you need to reboot your virtual machine, use the following command.
$ openstack server reboot apressinst3
For a hard reset of the server, you can add the --hard option. You may stop and start an instance if needed.
$ openstack server stop apressinst3
$ openstack server list
+--------------------------------------+-------------+---------+----------------------+---------------------+---------+
| ID                                   | Name        | Status  | Networks             | Image               | Flavor  |
+--------------------------------------+-------------+---------+----------------------+---------------------+---------+
| b360f5a5-b528-4f77-bdc7-3676ffcf0dff | apressinst3 | SHUTOFF | demo-net=172.16.0.48 | cirros-0.5.2-x86_64 | m1.tiny |
+--------------------------------------+-------------+---------+----------------------+---------------------+---------+
$ openstack server start apressinst3
$ openstack server list
+--------------------------------------+-------------+--------+----------------------+---------------------+---------+
| ID                                   | Name        | Status | Networks             | Image               | Flavor  |
+--------------------------------------+-------------+--------+----------------------+---------------------+---------+
| b360f5a5-b528-4f77-bdc7-3676ffcf0dff | apressinst3 | ACTIVE | demo-net=172.16.0.48 | cirros-0.5.2-x86_64 | m1.tiny |
+--------------------------------------+-------------+--------+----------------------+---------------------+---------+
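
The launch command accepts other useful options as well. For example, you can pass a cloud-init user data file that is applied inside the instance at first boot. The following is a sketch; the file name and its contents are assumptions, and the CirrOS test image has only limited cloud-init support, so a full GNU/Linux cloud image is a better target for this option.
$ cat > userdata.txt <<EOF
#cloud-config
hostname: apressinst4
EOF
$ openstack server create --image cirros-0.5.2-x86_64 --flavor m1.tiny --network demo-net --key-name apresskey1 --user-data userdata.txt apressinst4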

Managing Instance Snapshots

OpenStack can create snapshots of instances, even if a virtual machine is running. In this case, it is the user’s responsibility to keep the data consistent. It is important to know that a snapshot is not an instance recovery point. A snapshot is simply a regular Glance image. You can start a new virtual machine from the snapshot of another virtual machine.

Let’s check whether there is at least one image in Glance and one instance.
$ openstack image list
+--------------------------------------+---------------------+--------+
| ID                                   | Name                | Status |
+--------------------------------------+---------------------+--------+
| 7ffe1b43-7e86-4ad0-86b6-9fffa38b3c20 | cirros-0.5.2-x86_64 | active |
+--------------------------------------+---------------------+--------+
And there is at least one running server.
$ openstack server list
+--------------------------------------+-------------+--------+----------------------+---------------------+---------+
| ID                                   | Name        | Status | Networks             | Image               | Flavor  |
+--------------------------------------+-------------+--------+----------------------+---------------------+---------+
| b360f5a5-b528-4f77-bdc7-3676ffcf0dff | apressinst3 | ACTIVE | demo-net=172.16.0.48 | cirros-0.5.2-x86_64 | m1.tiny |
+--------------------------------------+-------------+--------+----------------------+---------------------+---------+
Now you can create a snapshot from a running instance.
$ openstack server image create --name apressinst3_snap apressinst3
And after that, you can list the available images.
$ openstack image list
+--------------------------------------+---------------------+--------+
| ID                                   | Name                | Status |
+--------------------------------------+---------------------+--------+
| 81d0b487-6384-4759-82b2-f0cfff075897 | apressinst3_snap    | active |
| 7ffe1b43-7e86-4ad0-86b6-9fffa38b3c20 | cirros-0.5.2-x86_64 | active |
+--------------------------------------+---------------------+--------+
As you can see, a snapshot was added to the list. You are free to create a new instance from this snapshot.
$ openstack server create --image apressinst3_snap --flavor m1.tiny --network demo-net --security-group apress-sgroup --key-name apresskey1 apressinst_snap
+-----------------------------+-------------------------------------------+
| Field                       | Value                                     |
+-----------------------------+-------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                    |
| OS-EXT-AZ:availability_zone |                                           |
| OS-EXT-STS:power_state      | NOSTATE                                   |
| OS-EXT-STS:task_state       | scheduling                                |
| OS-EXT-STS:vm_state         | building                                  |
| OS-SRV-USG:launched_at      | None                                      |
| OS-SRV-USG:terminated_at    | None                                      |
| accessIPv4                  |                                           |
| accessIPv6                  |                                           |
| addresses                   |                                           |
| adminPass                   | 6ZtJ2okTBG28                              |
| config_drive                |                                           |
| created                     | 2022-07-21T13:22:29Z                      |
| flavor                      | m1.tiny (1)                               |
| hostId                      |                                           |
| id                          | 51ee0c05-242f-41a5-ba20-b10dc4621fdb      |
| image                       | apressinst3_snap (81d0b487-6384-4759-82b2-f0cfff075897)                        |
| key_name                    | apresskey1                                |
| name                        | apressinst_snap                           |
| progress                    | 0                                         |
| project_id                  | 9e0c535c2240405b989afa450681df18          |
| properties                  |                                           |
| security_groups             | name='7748dc9f-1573-4225-a51e-8fc6328aafc0' |
| status                      | BUILD                                     |
| updated                     | 2022-07-21T13:22:29Z                      |
| user_id                     | a20b5a5995b740ff90034297335b330a          |
| volumes_attached            |                                           |
+-----------------------------+-------------------------------------------+
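Because a snapshot is a regular Glance image, you can also download it from the cloud, for example, to keep a backup or to reuse it in another environment. The following is a minimal sketch; the output file name is an assumption, and the actual image format depends on your Glance backend.
$ openstack image save --file apressinst3_snap.img apressinst3_snap
$ ls -lh apressinst3_snap.img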
You can use Horizon to create snapshots of instances, as shown in the dialog in Figure 7-7.

A screenshot represents the pop-up menu to create a snapshot.

Figure 7-7

Example of snapshot creation

Managing Quotas

A quota limits the number of available resources. The default number of resources allowed per tenant is defined in the main configuration file: /etc/nova/nova.conf. Here is an example.
[quota]
# Quota options allow to manage quotas in openstack deployment.
# The number of instances allowed per project.
# Minimum value: -1
instances=10
# The number of instance cores or vCPUs allowed per project.
# Minimum value: -1
cores=20
# The number of megabytes of instance RAM allowed per project.
# Minimum value: -1
ram=51200
# The number of metadata items allowed per instance.
# Minimum value: -1
metadata_items=128
# The number of injected files allowed.
# Minimum value: -1
injected_files=5
# The number of bytes allowed per injected file.
# Minimum value: -1
injected_file_content_bytes=10240
# The maximum allowed injected file path length.
# Minimum value: -1
injected_file_path_length=255
# The maximum number of key pairs allowed per user.
# Minimum value: -1
key_pairs=100
# The maximum number of server groups per project.
# Minimum value: -1
server_groups=10
# The maximum number of servers per server group.
# Minimum value: -1
server_group_members=10
The admin can retrieve the default number of resources allowed per project with the openstack quota list command. Here is an example.
$ openstack quota list --compute --detail
+-----------------------------+--------+----------+-------+
| Resource                    | In Use | Reserved | Limit |
+-----------------------------+--------+----------+-------+
| cores                       |      0 |        0 |    20 |
| fixed_ips                   |      0 |        0 |    -1 |
| floating_ips                |      0 |        0 |    -1 |
| injected_file_content_bytes |      0 |        0 | 10240 |
| injected_file_path_bytes    |      0 |        0 |   255 |
| injected_files              |      0 |        0 |     5 |
| instances                   |      0 |        0 |    10 |
| key_pairs                   |      0 |        0 |   100 |
| metadata_items              |      0 |        0 |   128 |
| ram                         |      0 |        0 | 51200 |
| security_group_rules        |      0 |        0 |    -1 |
| security_groups             |      0 |        0 |    -1 |
| server_group_members        |      0 |        0 |    10 |
| server_groups               |      0 |        0 |    10 |
+-----------------------------+--------+----------+-------+
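As an admin, you can change quotas for a particular project with the quota set command. The following sketch assumes a project named apress-project exists.
$ openstack quota set --instances 20 --cores 40 --ram 102400 apress-project
$ openstack quota show apress-project -c instances -c cores -c ram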
Users can see a part of the current quotas in a graphical view on the project’s Overview page, as shown in Figure 7-8.

A screenshot of an OpenStack dashboard depicts the overview of the current quota status.

Figure 7-8

User’s overview of the current quota status

Admins can manage quotas on a per-project basis in Horizon by going to Identity ➤ Projects and choosing Modify Quotas from the drop-down menu to the right of the project’s name.

Getting Nova Stats

First, let’s grab the list of all hypervisors.
$ openstack hypervisor list
+----+---------------------+-----------------+----------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP        | State |
+----+---------------------+-----------------+----------------+-------+
|  1 | rdo.test.local      | QEMU            | 192.168.122.10 | up    |
+----+---------------------+-----------------+----------------+-------+
To get a summary of resource usage of all instances running on the host, use the following command.
$ openstack hypervisor show rdo.test.local
+----------------------+-------------------------------------+
| Field                | Value                               |
+----------------------+-------------------------------------+
| current_workload     | 0                                   |
| aggregates           | []                                  |
| cpu_info             | arch='x86_64', features='['smep',.. |
| disk_available_least | 49                                  |
| free_disk_gb         | 57                                  |
| free_ram_mb          | 30527                               |
| host_ip              | 192.168.122.10                      |
| host_time            | 16:02:08                            |
| hypervisor_hostname  | rdo.test.local                      |
| hypervisor_type      | QEMU                                |
| hypervisor_version   | 7000000                             |
| id                   | 1                                   |
| load_average         | 0.69, 0.57, 0.45                    |
| local_gb             | 59                                  |
| local_gb_used        | 2                                   |
| memory_mb            | 32063                               |
| memory_mb_used       | 1536                                |
| running_vms          | 2                                   |
| service_host         | rdo.test.local                      |
| service_id           | 8                                   |
| state                | up                                  |
| status               | enabled                             |
| uptime               | 1:50                                |
| users                | 1                                   |
| vcpus                | 8                                   |
| vcpus_used           | 2                                   |
+----------------------+-------------------------------------+
As admin, you can use the --all-projects option to list virtual machines from all projects.
$ openstack server list --all-projects
+--------------------------------------+-----------------+--------+----------------------+---------------------+---------+
| ID                                   | Name            | Status | Networks             | Image               | Flavor  |
+--------------------------------------+-----------------+--------+----------------------+---------------------+---------+
| 51ee0c05-242f-41a5-ba20-b10dc4621fdb | apressinst_snap | ACTIVE | demo-net=172.16.0.25 | apressinst3_snap    | m1.tiny |
| b360f5a5-b528-4f77-bdc7-3676ffcf0dff | apressinst3     | ACTIVE | demo-net=172.16.0.48 | cirros-0.5.2-x86_64 | m1.tiny |
+--------------------------------------+-----------------+--------+----------------------+---------------------+---------+
And as admin, you can see an overall picture of all the hypervisors in Horizon, as shown in Figure 7-9.

A screenshot of an OpenStack dashboard depicts the hypervisor summary on the right.

Figure 7-9

Example of the hypervisors’ summary picture

For some low-level operations, you may want to use the old nova command. If needed, you can easily get diagnostic information about any instance.
$ nova diagnostics 51ee0c05-242f-41a5-ba20-b10dc4621fdb
+----------------+----------------------------------------------+
| Property       | Value                                        |
+----------------+----------------------------------------------+
| config_drive   | False                                        |
| cpu_details    | [{"id": 0, "time": 13890000000, "utilisation": null}]                        |
| disk_details   | [{"read_bytes": 29513728, "read_requests": 5022, "write_bytes": 450560,                 |
|                | "write_requests": 56, "errors_count": -1}]   |
| driver         | libvirt                                      |
| hypervisor     | qemu                                         |
| hypervisor_os  | linux                                        |
| memory_details | {"maximum": 0, "used": 0}                    |
| nic_details    | [{"mac_address": "fa:16:3e:19:3d:f9", "rx_octets": 11097, "rx_errors": 0,          |
|                | "rx_drop": 0, "rx_packets": 123, "rx_rate": null, "tx_octets": 9746,                     |
|                | "tx_errors": 0, "tx_drop": 0, "tx_packets": 83, "tx_rate": null}]                        |
| num_cpus       | 1                                            |
| num_disks      | 1                                            |
| num_nics       | 1                                            |
| state          | running                                      |
| uptime         | 66626                                        |
+----------------+----------------------------------------------+
Finally, you can get a summary of usage statistics for each tenant.
$ nova usage-list
Usage from 2022-06-24 to 2022-07-23:
+----------------------------------+---------+---------------+-----------+----------------+
| Tenant ID                        | Servers | RAM MiB-Hours | CPU Hours | Disk GiB-Hours |
+----------------------------------+---------+---------------+-----------+----------------+
| 9e0c535c2240405b989afa450681df18 | 3       | 30456.67      | 59.49     | 59.49          |
+----------------------------------+---------+---------------+-----------+----------------+
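The same accounting data is also available through the unified openstack client. The following is a sketch; adjust the dates to the period you are interested in.
$ openstack usage list --start 2022-06-24 --end 2022-07-23
$ openstack usage show --project 9e0c535c2240405b989afa450681df18 --start 2022-06-24 --end 2022-07-23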

Verifying the Operation and Managing Nova Compute Servers

You can check whether all Nova servers are started and active by using the systemctl command.
# systemctl status *nova* -n 0
● openstack-nova-conductor.service - OpenStack Nova Conductor Server
     Loaded: loaded (/usr/lib/systemd/system/openstack-nova-conductor.service; enabled; vendor preset: disabled)
     Active: active (running) since Fri 2022-07-22 08:52:33 CEST; 1h 2min ago
   Main PID: 4085 (nova-conductor)
      Tasks: 19 (limit: 204820)
     Memory: 385.9M
        CPU: 56.945s
     CGroup: /system.slice/openstack-nova-conductor.service
             ├─4085 /usr/bin/python3 /usr/bin/nova-conductor
             ├─4790 /usr/bin/python3 /usr/bin/nova-conductor
             ├─4791 /usr/bin/python3 /usr/bin/nova-conductor
             ├─4792 /usr/bin/python3 /usr/bin/nova-conductor
             ├─4793 /usr/bin/python3 /usr/bin/nova-conductor
             ├─4794 /usr/bin/python3 /usr/bin/nova-conductor
             ├─4796 /usr/bin/python3 /usr/bin/nova-conductor
             ├─4797 /usr/bin/python3 /usr/bin/nova-conductor
             └─4798 /usr/bin/python3 /usr/bin/nova-conductor
● openstack-nova-scheduler.service - OpenStack Nova Scheduler Server
     Loaded: loaded (/usr/lib/systemd/system/openstack-nova-scheduler.service; enabled; vendor preset: disabled)
     Active: active (running) since Fri 2022-07-22 08:52:33 CEST; 1h 2min ago
   Main PID: 4086 (nova-scheduler)
      Tasks: 5 (limit: 204820)
     Memory: 197.1M
        CPU: 37.323s
     CGroup: /system.slice/openstack-nova-scheduler.service
             ├─4086 /usr/bin/python3 /usr/bin/nova-scheduler
             ├─4773 /usr/bin/python3 /usr/bin/nova-scheduler
             ├─4774 /usr/bin/python3 /usr/bin/nova-scheduler
             ├─4775 /usr/bin/python3 /usr/bin/nova-scheduler
             └─4776 /usr/bin/python3 /usr/bin/nova-scheduler
● openstack-nova-compute.service - OpenStack Nova Compute Server
     Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)
     Active: active (running) since Fri 2022-07-22 08:52:45 CEST; 1h 4min ago
   Main PID: 1408 (nova-compute)
      Tasks: 25 (limit: 204820)
     Memory: 151.3M
        CPU: 35.221s
     CGroup: /system.slice/openstack-nova-compute.service
             └─1408 /usr/bin/python3 /usr/bin/nova-compute
● openstack-nova-novncproxy.service - OpenStack Nova NoVNC Proxy Server
     Loaded: loaded (/usr/lib/systemd/system/openstack-nova-novncproxy.service; enabled; vendor preset: disabled)
     Active: active (running) since Fri 2022-07-22 08:50:28 CEST; 1h 6min ago
   Main PID: 1410 (nova-novncproxy)
      Tasks: 1 (limit: 204820)
     Memory: 99.7M
        CPU: 2.940s
     CGroup: /system.slice/openstack-nova-novncproxy.service
             └─1410 /usr/bin/python3 /usr/bin/nova-novncproxy --web /usr/share/novnc/

As you can see in this example, all services are running on the same host. In a production environment, they all run on the control nodes except nova-compute, which runs on the compute nodes.
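
In addition to systemd, you can check the cloud-wide state of the Nova services with the openstack client (admin credentials are required). The output below is only a sketch of what the all-in-one node used in this chapter might report.
$ openstack compute service list -c Binary -c Host -c Status -c State
+----------------+----------------+---------+-------+
| Binary         | Host           | Status  | State |
+----------------+----------------+---------+-------+
| nova-conductor | rdo.test.local | enabled | up    |
| nova-scheduler | rdo.test.local | enabled | up    |
| nova-compute   | rdo.test.local | enabled | up    |
+----------------+----------------+---------+-------+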

According to the OpenStack documentation, although the compute and metadata APIs can be run using independent scripts that provide Eventlet-based HTTP servers, it is generally considered more performant and flexible to run them using a generic HTTP server that supports WSGI (such as Apache or Nginx). In this particular PackStack example, the Apache web server hosts the WSGI services. You can find the Apache configs for the WSGI services in the /etc/httpd/conf.d/ directory.
# ls /etc/httpd/conf.d/*wsgi*
/etc/httpd/conf.d/10-aodh_wsgi.conf     /etc/httpd/conf.d/10-keystone_wsgi.conf  /etc/httpd/conf.d/10-nova_metadata_wsgi.conf
/etc/httpd/conf.d/10-gnocchi_wsgi.conf  /etc/httpd/conf.d/10-nova_api_wsgi.conf  /etc/httpd/conf.d/10-placement_wsgi.conf
Let’s check for the presence of the Nova service in the Keystone services catalog.
$ source keystonerc_admin
$ openstack service show nova
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute Service        |
| enabled     | True                             |
| id          | 44cb0eddaae5494f83d07bb48278eed6 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
For troubleshooting, you may also need to know where the Nova endpoints are.
$ openstack endpoint list | grep nova
| 0b36f879db4647568c29579f1347d386 | RegionOne | nova         | compute        | True    | public    | http://192.168.122.10:8774/v2.1                  |
| 551894b71129448eb9efc934f7d1a374 | RegionOne | nova         | compute        | True    | internal  | http://192.168.122.10:8774/v2.1                  |
| c1a044e51e794cf09e672a7ec29619fd | RegionOne | nova         | compute        | True    | admin     | http://192.168.122.10:8774/v2.1                  |

The Nova service listens for incoming connections at the 192.168.122.10 IP address and port 8774.
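
A quick low-level check is to query the API root directly; nova-api answers with the list of supported API versions even without a token. The following sketch assumes python3 is available for pretty-printing the JSON response.
$ curl -s http://192.168.122.10:8774/ | python3 -m json.tool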

You may also want to check Nova’s log files. With the help of the lsof command, you can enumerate the log files and the services that are using them.
# lsof /var/log/nova/*
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF     NODE NAME
nova-comp 1408 nova    3w   REG  253,0   763552 34858285 /var/log/nova/nova-compute.log
nova-novn 1410 nova    3w   REG  253,0    16350 35522927 /var/log/nova/nova-novncproxy.log
nova-cond 4085 nova    3w   REG  253,0  1603975 35839626 /var/log/nova/nova-conductor.log
nova-sche 4086 nova    3w   REG  253,0  1612316 35839628 /var/log/nova/nova-scheduler.log
httpd     4209 nova    9w   REG  253,0   490304 35839631 /var/log/nova/nova-api.log
...
httpd     4217 nova    9w   REG  253,0    16360 34619037 /var/log/nova/nova-metadata-api.log
...

Summary

OpenStack Compute is the heart of OpenStack. You cannot avoid this topic, so study this chapter thoroughly. The Compute topic is significant because running compute resources is the main purpose of an OpenStack cloud.

The next chapter covers OpenStack’s object storage.

Review Questions

  1. Which service acts as a proxy service between the database and nova-compute services?

    A. nova-conductor

    B. nova-novncproxy

    C. nova-api

    D. nova-scheduler

  2. Which command adds a new flavor named m5.tiny that has a 5 GB disk, 2 vCPUs, and 500 MB RAM?

    A. nova flavor-create --is-public true m5.tiny auto 500 2 5

    B. openstack flavor create --public --ram 500 --disk 5 --cpus 2 m5.tiny

    C. openstack flavor create --public --ram 500 --disk 5 --vcpus 2 m5.tiny

    D. openstack flavor-create --public --ram 500 --disk 5 --vcpus 2 m5.tiny

  3. Which GNU/Linux permissions should be applied to the private SSH key?

    A. 640

    B. 660

    C. 600

    D. 620

  4. Which command lets a regular user get Nova quotas for the project?

    A. nova quota-list

    B. openstack quota show

    C. nova show-quota

    D. openstack quota show --all
Answers

  1. A

  2. C

  3. C

  4. B