This chapter covers 35% of the Certified OpenStack Administrator exam requirements. OpenStack Compute is the heart of OpenStack, so you should study this chapter thoroughly before taking the exam.
Nova’s Architecture and Components
Nova, OpenStack’s compute service, is the heart of the OpenStack cloud. Its main goal is to manage basic virtual machine functions such as creating, starting, and stopping instances. Let’s look at the architecture and general parts of Nova. As with other services, Nova uses a message broker and a database. By default, the database is MariaDB, and the message broker is RabbitMQ. The main services that support Nova are:
nova-api is a service that receives REST API calls from other services and clients and responds to them.
nova-scheduler is Nova’s scheduling service. It takes requests for starting instances from the queue and selects a compute node to run the virtual machine on. The selection of a hypervisor is based on filters and weights. Filters can take into account the amount of available memory, a requested availability zone, or membership in a group of hosts. These rules apply each time an instance is started or migrated to another hypervisor.
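As a sketch of how filtering is configured, the filter set lives in the [filter_scheduler] section of nova.conf. The filters named below are standard filters shipped with Nova; the exact list enabled varies by deployment:

```ini
[filter_scheduler]
# Filters applied, in order, to the list of candidate compute nodes
enabled_filters = AvailabilityZoneFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter
```

Hosts that pass every filter are then sorted by weight, and the scheduler picks the best-weighted node.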
nova-conductor is the proxy service between the database and the nova-compute services. It helps with horizontal scalability.
nova-compute is the main part of an IaaS system. This daemon usually runs only on compute nodes. Its role is to manage a hypervisor through the hypervisor’s specific API. It is designed to manage pools of compute resources and can work with widely available virtualization technologies.
placement-api is a REST API stack and data model used to track resource provider inventories, usages, and different classes of resources. It was introduced in the Newton release of OpenStack.
nova-novncproxy and nova-spicehtml5proxy are services that provide access to instance consoles through the VNC and SPICE remote access protocols.
Figures 7-1 and 7-2 illustrate the process of starting an instance.
In this example, two hosts are used: the compute host, which acts as the hypervisor and runs the nova-compute service, and the controller node, with all its management services. The following describes the workflow for starting the instance.
1. The client (in this particular example, the client is the Horizon web client, but it can be the openstack CLI command) asks keystone-api for authentication and generates the access token.
2. If authentication succeeds, the client sends a request to run an instance to nova-api. This is similar to the openstack server create command.
3. Nova validates the token and receives headers with roles and permissions from keystone-api.
4. Nova checks the database for conflicts with existing object names and creates a new entry for this instance in its database.
5. nova-api sends the RPC for scheduling the instance to the nova-scheduler service.
6. The nova-scheduler service picks up the request from the message queue.
7. The nova-scheduler service queries resources from the placement service.
8. Placement returns a list of available resources to nova-scheduler.
9. nova-scheduler uses filters and weights to build a list of target nodes. Then the scheduler sends an RPC call to the nova-compute service to launch the virtual machine.
10. The nova-compute service picks up the request from the message queue.
11. The nova-compute service asks nova-conductor to fetch information about the instance, for example, the host ID and flavor.
12. The nova-conductor service picks up the request from the message queue.
13. The nova-conductor service gets the instance information from the database.
14. nova-compute takes the instance information from the queue. The compute host now knows which image is used to start the instance, so nova-compute asks the glance-api service for the image’s URL.
15. glance-api validates the token and returns the image’s metadata, including the URL.
16. The nova-compute service passes a token to neutron-api and asks it to configure the network for the instance.
17. Neutron validates the token and configures the network.
18. nova-compute interacts with cinder-api to attach the volume to the instance.
19. nova-compute generates data for the hypervisor and executes the request via libvirt.
Now let’s look at Nova’s main configuration file, /etc/nova/nova.conf. Table 7-1 lists the options available.
Table 7-1. Main Configuration Options in /etc/nova/nova.conf

[DEFAULT]
my_ip = 192.168.122.10
Description: Management interface IP address of the controller node

[DEFAULT]
enabled_apis = osapi_compute,metadata
Description: Enables support for the compute service and metadata APIs

(config example not shown)
Description: Authentication parameters: endpoints and other parameters like default project name, domain name, project name for services, and account information for the Nova user

(config example not shown)
Description: RabbitMQ broker address, port, user name, and password

[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
Description: Management interface IP address of the VNC proxy

[glance]
api_servers = 192.168.122.10:9292
Description: Location of the Image Service API

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://192.168.122.10:5000/v3
username = placement
password = password
Description: Configuration for access to the Placement service
Managing Flavors
An instance flavor is a virtual machine template that describes the instance’s main parameters. It is also known as an instance type. Immediately after installing the OpenStack cloud, you can choose from several predefined flavors. You can also add new flavors and delete existing ones. Use the following command to list the flavors.
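The listing referred to here can be produced with the flavor list command; the output varies by deployment:

```shell
# List flavors visible to the current project
$ openstack flavor list

# As admin, include non-public flavors as well
$ openstack flavor list --all
```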
As you can see in the listing, some flavors were created during OpenStack’s installation. In some distributions, the list is empty out of the box. To list the details of a flavor, use the following command. As you can see, m1.tiny has one vCPU and 512 MB of memory.
$ openstack flavor show m1.tiny
+----------------------------+---------+
| Field | Value |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| access_project_ids | None |
| description | None |
| disk | 1 |
| id | 1 |
| name | m1.tiny |
| os-flavor-access:is_public | True |
| properties | |
| ram | 512 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+---------+
By default, only the admin can list all the flavors and create new ones. Here is an example of the creation of a new publicly available flavor.
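With the standard openstack CLI, creating the flavor described below might look like this:

```shell
# Create a public flavor: 400 MB RAM, 3 GB disk, 1 vCPU
$ openstack flavor create --ram 400 --disk 3 --vcpus 1 --public m10.tiny
```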
In this example, a new flavor was created with the name m10.tiny that has a 3 GB disk, 400 MB RAM, and 1 vCPU. You can delete the flavor with the following command.
$ openstack flavor delete m10.tiny
For managing flavors in Horizon, go to Admin ➤ Compute ➤ Flavors (see Figure 7-3).
Managing and Accessing an Instance Using a Key Pair
Before launching instances, you should know how to work with OpenSSH key pairs. Access to virtual machines with OpenSSH key-based authentication is essential for using GNU/Linux in the cloud computing environment.
SSH (Secure Shell) allows you to authenticate users by using the private-public key pair. You should generate two linked cryptographic keys: public and private. The public key can be given to anyone. Your private key should be kept in a secure place—it is only yours. An instance running the OpenSSH server with your public key can issue a challenge that can only be answered by the system holding your private key. As a result, it can be authenticated through the presence of your key. This allows you to access a virtual machine in a way that does not require passwords.
OpenStack can store public keys and put them inside the instance when it is started. It is your responsibility to keep the private key secured. If you lose the key, you can’t recover it. In that case, you should remove the public key from your OpenStack cloud and generate a new key pair. If somebody steals a private key, they can get access to your instances.
Tip
In a GNU/Linux system, public keys are stored in the ∼/.ssh/authorized_keys file.
Let’s start by creating a key pair. The corresponding command is:
$ openstack keypair create apresskey1 > key1
This command creates a key pair. The private key is stored in the key1 file on your workstation.
A public key is stored in your OpenStack cloud and is ready to use. You can check the list of public keys accessible to you with the following command.
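The list of public keys stored in the cloud can be retrieved with:

```shell
# Show key pairs available to the current user
$ openstack keypair list
```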
When connecting over SSH, the -i option points to your private key. The next section explains how to run an instance and inject a public key into it.
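Connecting to an instance with the saved private key can be sketched as follows. The IP address here is an assumption for this example, and cirros is the default user name for CirrOS images:

```shell
# Restrict permissions first; ssh refuses world-readable keys
$ chmod 600 key1

# Log in to the instance using the private key (example IP)
$ ssh -i key1 cirros@203.0.113.25
```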
Launching, Shutting Down, and Terminating the Instance
If you have only one network in the project, you need at least three parameters to start an instance: the name of the instance, the flavor, and the source of the instance. The instance source can be an image, a snapshot, or a block storage volume. At boot time, you can specify optional parameters, like key pairs, security groups, user data files, and a volume for persistent storage. In this example environment, there are two networks, so the network must be specified as well.
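Putting the parameters from this section together, the boot command can be sketched as follows; the network name apress-network is an assumption for this example:

```shell
# Boot an instance from an image, with a security group,
# key pair, and an explicit network
$ openstack server create \
    --flavor m1.tiny \
    --image cirros-0.5.2-x86_64 \
    --security-group apress-sgroup \
    --key-name apresskey1 \
    --network apress-network \
    apressinst3
```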
This example runs an instance named apressinst3 with the flavor m1.tiny from an image named cirros-0.5.2-x86_64. It also specifies the security group named apress-sgroup and the key pair apresskey1. To check the current state of the available instances, use the following command.
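The current state of instances is shown by:

```shell
# List instances in the current project, with status and addresses
$ openstack server list
```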
You may want to connect to the instance console in your browser using the noVNC client, a VNC client that uses HTML5 with encryption support. To get the URL, use the following command.
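The console URL for the instance started earlier is retrieved with:

```shell
# Print the noVNC console URL for the instance
$ openstack console url show apressinst3
```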
If you put the URL in your browser’s address bar, you can connect to the machine, as shown in Figure 7-5.
Tip
If you get an error with a PackStack installation, check that the hostname of your OpenStack server is present in /etc/hosts.
If you prefer to work with instances in GUI, you can use the Horizon web interface. For that, go to Project ➤ Compute ➤ Instances. An example of the series of launch dialogs is shown in Figure 7-6. Walk through them by using the Next button.
If there is an error, you may see something like the following.
OpenStack can create snapshots of instances, even while a virtual machine is running. In this case, the user is responsible for keeping the data consistent. It is important to know that a snapshot is not an instance recovery point. A snapshot is the same as a regular Glance image, and you can start a new virtual machine from the snapshot of another virtual machine.
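From the CLI, a snapshot of a running instance can be created as sketched below; the snapshot name is an assumption for this example:

```shell
# Create a Glance image from the running instance apressinst3
$ openstack server image create --name apressinst3-snap1 apressinst3
```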
Let’s check whether there is at least one image in Glance and one instance.
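Both checks can be done with the standard list commands:

```shell
# Images registered in Glance
$ openstack image list

# Instances in the current project
$ openstack server list
```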
You can use Horizon to create snapshots of instances, as shown in the dialog in Figure 7-7.
Managing Quotas
A quota limits the number of available resources. The default number of resources allowed per tenant is defined in the main configuration file: /etc/nova/nova.conf. Here is an example.
[quota]
# Quota options allow managing quotas in an OpenStack deployment.
# The number of instances allowed per project.
# Minimum value: -1
instances=10
# The number of instance cores or vCPUs allowed per project.
# Minimum value: -1
cores=20
# The number of megabytes of instance RAM allowed per project.
# Minimum value: -1
ram=51200
# The number of metadata items allowed per instance.
# Minimum value: -1
metadata_items=128
# The number of injected files allowed.
# Minimum value: -1
injected_files=5
# The number of bytes allowed per injected file.
# Minimum value: -1
injected_file_content_bytes=10240
# The maximum allowed injected file path length.
# Minimum value: -1
injected_file_path_length=255
# The maximum number of key pairs allowed per user.
# Minimum value: -1
key_pairs=100
# The maximum number of server groups per project.
# Minimum value: -1
server_groups=10
# The maximum number of servers per server group.
# Minimum value: -1
server_group_members=10
The admin can retrieve the default number of resources allowed per project with the openstack quota list command. Here is an example.
Users can see a part of the current quotas in a graphical view on the project’s Overview page, as shown in Figure 7-8.
Admins can manage quotas on a per-project basis in Horizon by going to Identity ➤ Projects ➤ Modify Quotas and accessing the drop-down menu to the right of the project’s name.
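From the CLI, an admin can change a project’s quotas with openstack quota set; the project name apress is an assumption for this example:

```shell
# Raise the instance and vCPU quotas for the "apress" project
$ openstack quota set --instances 20 --cores 40 apress
```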
As you can see in this example, all services are running on the same host. In a production environment, all of them run on the control nodes except nova-compute, which runs on the compute nodes.
According to the OpenStack documentation, although the compute and metadata APIs can be run using independent scripts that provide Eventlet-based HTTP servers, it is generally considered more performant and flexible to run them using a generic HTTP server that supports WSGI (such as Apache or Nginx). In this particular PackStack example, the Apache web server is used as the WSGI server. You can find the Apache configs for the WSGI services in the /etc/httpd/conf.d/ directory.
The Nova service listens for incoming connections at the 192.168.122.10 IP address and port 8774.
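You can verify the listener on the controller node with ss (or netstat); the address shown should match my_ip from nova.conf:

```shell
# Show listening TCP sockets and filter for the nova-api port
$ ss -ltn | grep 8774
```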
You may also want to check Nova’s log files. With the help of the lsof command, you can enumerate the log files and services that are using it.
# lsof /var/log/nova/*
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nova-comp 1408 nova 3w REG 253,0 763552 34858285 /var/log/nova/nova-compute.log
nova-novn 1410 nova 3w REG 253,0 16350 35522927 /var/log/nova/nova-novncproxy.log
nova-cond 4085 nova 3w REG 253,0 1603975 35839626 /var/log/nova/nova-conductor.log
nova-sche 4086 nova 3w REG 253,0 1612316 35839628 /var/log/nova/nova-scheduler.log
httpd 4209 nova 9w REG 253,0 490304 35839631 /var/log/nova/nova-api.log
...
httpd 4217 nova 9w REG 253,0 16360 34619037 /var/log/nova/nova-metadata-api.log
...
Summary
OpenStack Compute is the heart of OpenStack: running compute resources is the main purpose of an OpenStack cloud, so this topic cannot be avoided and this chapter should be studied thoroughly.
The next chapter covers OpenStack’s object storage.
Review Questions
1.
Which service acts as a proxy service between the database and nova-compute services?
A.
nova-conductor
B.
nova-nonvncproxy
C.
nova-api
D.
nova-scheduler
2.
Which adds a new flavor named m5.tiny that has a 5 GB disk, 2 vCPU, and 500 MB RAM?
A.
nova flavor-create --is-public true m5.tiny auto 500 2 5