16

Virtualization

There have been a great many advancements in the IT space in the last few decades, and a few technologies have come along that have truly revolutionized the technology industry. I’m sure few would dispute that the internet is by far the most revolutionary technology to come around, but another technology that has created a paradigm shift in IT is virtualization. This concept changed the way we maintain our data centers, allowing us to segregate workloads into many smaller virtual machines running on a single physical server, which lets us get even more use out of our hardware. Since Ubuntu features the latest advancements of the Linux kernel, virtualization support is built right into it. After installing just a few packages that allow us to interact with the virtualization features, we can create virtual machines on our Ubuntu server without the need for a pricey license agreement or support contract. In this chapter, I’ll walk you through setting up your own Ubuntu-based virtualization solution, covering the following topics:

  • Prerequisites and considerations
  • Setting up a virtual machine server
  • Creating virtual machines
  • Bridging the virtual machine network
  • Simplifying virtual machine creation with cloning
  • Managing virtual machines via the command line

In order to get started, we’ll need a server to use for this task, so we’ll first discuss some considerations to keep in mind when setting up a server for this purpose.

Prerequisites and considerations

I’m sure many of you have already used a virtualization solution before. In fact, I bet a great many readers are following along with this book while using a Virtual Machine (VM) running in a solution such as VirtualBox, Parallels, VMware, or one of the others. Those applications and others like them are great for testing Ubuntu or other operating systems on your desktop or laptop. In this section, we’ll set up a VM server that can act as a centrally available server on which to run VMs.

This will be easier than you may think: Ubuntu has virtualization built right in. It comes in the form of a dynamic duo consisting of the Kernel-based VM (KVM) and Quick Emulator (QEMU), which together form a virtualization suite that enables Ubuntu (and Linux in general) to run VMs without the need for a third-party solution. KVM is the component built into the Linux kernel itself that performs the magic under the hood: it handles the low-level work needed to separate tasks running on the physical host from those running in a guest VM. QEMU is equally important, as it emulates the hardware components that are generally found in physical servers. Together, KVM and QEMU make up the virtualization solution that can be enabled on an Ubuntu server to turn it into a host for VMs.

To be fair, you could set up something like VirtualBox on your Ubuntu server to accomplish the same thing and end up with a centrally available virtualization server. That’s perfectly valid; there’s certainly nothing wrong with running VirtualBox this way, and many people do. But there are improvements to be had by utilizing the built-in system: KVM offers a very fast interface to the Linux kernel, running your VMs at near-native speeds, depending on your use case. QEMU/KVM (which I’ll refer to simply as KVM going forward) is about as native as you can get.

I bet you’re eager to get started, but there are a few quick things to consider before we dive in. First, of all the activities I’ve walked you through in this book so far, setting up our own virtualization solution will be the most expensive from a hardware perspective. The more VMs you plan on running, the more resources your server will need to have available (especially RAM). Thankfully, most computers nowadays ship with 8 GB of RAM at a minimum, with 16 GB or more being fairly common. With most modern computers, you should be able to run VMs without too much of an impact. Depending on what kind of machine you’re using, the CPU and RAM may present bottlenecks, especially when it comes to legacy hardware.

For the purposes of this chapter, it’s recommended that you have a PC or server available with a processor that’s capable of supporting VM extensions. A good majority of CPUs on computers nowadays offer this, though some may not. To be sure, you can run the following command on the machine you intend to host the KVM VMs on in order to find out whether your CPU supports virtualization extensions:

egrep -c '(vmx|svm)' /proc/cpuinfo 

A result of 1 or more means that your CPU does support virtualization extensions. A result of 0 means it does not:

Figure 16.1: Checking the CPU for compatibility with virtualization
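The count reported is the number of CPU threads whose flags include vmx (Intel VT-x) or svm (AMD-V): egrep -c counts matching lines, and /proc/cpuinfo contains one flags line per thread. You can see how this works with a small sample file (the file path and flag contents below are purely illustrative):

```shell
# Build a two-thread sample of /proc/cpuinfo flag lines (illustrative
# data only; on a real host, you'd grep /proc/cpuinfo directly):
cat > /tmp/sample_cpuinfo <<'EOF'
flags : fpu vme de pse tsc msr pae mce vmx smx est tm2
flags : fpu vme de pse tsc msr pae mce vmx smx est tm2
EOF

# One matching line per thread, so the count here is 2:
egrep -c '(vmx|svm)' /tmp/sample_cpuinfo
```

On Ubuntu, you can also install the cpu-checker package and run sudo kvm-ok, which prints a friendlier verdict on whether KVM acceleration can be used.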

Even if your CPU does support virtualization extensions, they are often disabled by default on end user PCs sold today, and even on some servers. To enable these extensions, you may need to enter the BIOS setup screen for your computer and enable the option. Depending on your CPU and chipset, the option may be labeled something like “virtualization support,” or go by a more technical name such as VT-x or AMD-V. Unfortunately, I won’t be able to walk you through enabling the virtualization extensions on your hardware, since the instructions differ from one machine to another. If in doubt, refer to the documentation for your hardware.

One final note: I’m sure many of you are using VirtualBox, as it seems to be a very popular solution for those testing out Linux distributions (and rightfully so; it’s great!). However, you can’t run both VirtualBox and KVM VMs on the same machine simultaneously. You can certainly have both solutions installed on the same machine, but you just can’t have a VirtualBox VM up and running, and then expect to also be able to start up a KVM VM. The virtualization extensions of your CPU can only work with one solution at a time.

Another consideration to bear in mind is the amount of space the server has available, as VMs can take quite a bit of space. The default directory for KVM VM images is /var/lib/libvirt/images. If your /var directory is part of the root filesystem, you may not have a lot of space to work with here. One trick is that you can mount an external storage volume to this directory, so you can store your VM disk images on another volume. Or you can simply create a symbolic link that will point this directory somewhere else. We discussed symbolic links in Chapter 5, Managing Files and Directories. The choice is yours. If your root filesystem has at least 10 GB available, you should be able to create at least one VM without needing to configure the storage. I think it’s a fair estimate to assume at least 10 GB of hard drive space per VM.
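As a sketch of the symbolic link trick, the following uses throwaway paths under /tmp (the paths are illustrative; on a real server, the source would be /var/lib/libvirt/images and the target a directory on your larger volume):

```shell
# Pretend /tmp/bigdisk is a large mounted volume, and
# /tmp/var-lib-libvirt stands in for /var/lib/libvirt:
mkdir -p /tmp/bigdisk/vm-images
mkdir -p /tmp/var-lib-libvirt

# Point the images directory at the big volume via a symbolic link:
ln -s /tmp/bigdisk/vm-images /tmp/var-lib-libvirt/images

# Anything written through the link lands on the big volume:
touch /tmp/var-lib-libvirt/images/test-vm.qcow2
ls /tmp/bigdisk/vm-images
```

On an actual server, you would stop the libvirtd service first, move any existing images over to the new volume, and then create the link in place of the original directory.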

Setting up a virtual machine server

With all the discussion out of the way, let’s start the process and set up our virtualization server. Even though KVM is built into the Linux kernel, we’ll still need to install some packages in order to properly interface with it. Specifically, we’ll need several libvirt packages, as well as QEMU itself. libvirt provides the daemon and a set of useful tools for managing virtual machines on our server.

These packages will require a decent number of dependencies, so it may take a few minutes for everything to install:

sudo apt install bridge-utils libvirt-clients libvirt-daemon-system qemu-system-x86

You’ll now have an additional service running on your server, libvirtd. Once you’ve finished installing KVM’s packages, this service will be started and enabled for you. Feel free to take a look at it to see for yourself:

systemctl status libvirtd 

You should see information on the state of the service, similar to the following:

Figure 16.2: Checking the status of the libvirtd unit after installing KVM-related packages

Let’s stop this service for now, as we have some additional configuration to do:

sudo systemctl stop libvirtd

Next, we’ll need to make sure we have two required groups on our server, kvm and libvirt. It’s quite possible that the packages that we’ve installed have added these groups on our server already, so feel free to check the contents of /etc/group and see if they’re there. If not, you can create them with the groupadd command:

sudo groupadd kvm
sudo groupadd libvirt

Our primary user account should be a member of both groups. If your user isn’t already a member of these, add your user to the required groups (substitute the username, jay, with yours):

sudo usermod -aG kvm jay
sudo usermod -aG libvirt jay

At this point, you may as well log out and then log in again to ensure the changes to your group memberships have taken effect.
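After logging back in, you can confirm the memberships took effect by running id -nG with your username. The check amounts to looking for the group names alongside your user, as this small illustration shows (the /etc/group excerpt, group IDs, and the username jay are sample data):

```shell
# Sample /etc/group entries (illustrative; real entries live in /etc/group):
cat > /tmp/sample_group <<'EOF'
kvm:x:108:jay
libvirt:x:109:jay
EOF

# Confirm the user appears in both groups:
for grp in kvm libvirt; do
  grep -q "^${grp}:.*jay" /tmp/sample_group && echo "jay is in ${grp}"
done
```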

To ensure we’ll be able to manage virtualization properly, we should ensure that users of the kvm group have access to the /var/lib/libvirt/images directory so that they’ll have access to the data that will be stored in the directory. First, we’ll apply the kvm group to this folder:

sudo chown :kvm /var/lib/libvirt/images 

Then, we’ll set the permissions of /var/lib/libvirt/images such that anyone in the kvm group will be able to modify its contents:

sudo chmod g+rw /var/lib/libvirt/images 
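You can verify the result with stat. Here is the same chmod applied to a scratch directory (the path is illustrative; on the server, you’d check /var/lib/libvirt/images itself):

```shell
# Create a scratch directory and grant the group read/write access,
# as we did for /var/lib/libvirt/images:
mkdir -p /tmp/demo-images
chmod g+rw /tmp/demo-images

# The group permission bits (characters 5-6 of the mode) now read 'rw':
stat -c '%A' /tmp/demo-images
```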

With the initial packages and permissions in place, we can now start the libvirtd service:

sudo systemctl start libvirtd 

Next, check the status of the service to make sure that there are no errors:

sudo systemctl status libvirtd 

Now that we’ve configured the server, we can set up our workstation to be able to connect to it and manage the virtualization implementation that we’ve set up. We’ll install a utility that will give us a graphical user interface (GUI) through which we can perform administration tasks relating to VMs. The utility we’ll be using for this purpose is known as Virtual Machine Manager abbreviated as virt-manager. This utility is installed on Linux workstations, so you’ll need to install it on a laptop or desktop that’s running a desktop variant of Linux. If you have a computer running Debian or Ubuntu, the following command will install the packages that are required for this:

sudo apt install ssh-askpass virt-manager 

If you use a distribution of Linux other than Ubuntu, Debian, or one based on them, then you may need to consult the documentation for your distribution in order to install virt-manager. If you’re not running Linux on your workstation at all, there is a suite of command-line utilities that can be used to manage VMs that we’ll cover later in this chapter when we discuss this in the Managing virtual machines via the command line section. If all else fails, you can install this utility inside a Linux VM running on your workstation.

Next, open virt-manager on your administration machine. It should be located in the Applications menu of your desktop environment, usually under the System Tools section as Virtual Machine Manager. If you have trouble finding it, simply run virt-manager at your shell prompt. When you first launch it, you may see the following error:

Figure 16.3: A possible error that may appear when first launching virt-manager

If you do see the error, simply dismiss it and don’t worry about it. virt-manager attempts to connect to an instance of libvirtd running on your local computer by default. Unless you’re also running KVM VMs locally and have already set that up, this attempt will fail. But that doesn’t matter for us, as we’ll be using virt-manager to manage VMs on our server.

Once you’ve opened virt-manager, you’ll see the main window, which will look similar to the following:

Figure 16.4: The virt-manager application

The virt-manager utility is especially useful as it allows us to manage both remote and local KVM servers. From one utility, you can create connections to any of your KVM servers, including one or more external servers or localhost if you are running KVM on your laptop or desktop. To create a new connection, click on File and select Add Connection. A new screen will appear, where we can fill out the details of the KVM server we wish to connect to:

Figure 16.5: Adding a new connection to virt-manager

In the Add Connection window, enter the details of your connection. In the screenshot, you can see that I checked the Connect to remote host over SSH box (which selects SSH as the connection method), entered jay as the Username, and entered the IP address of my KVM server (172.16.250.19) in the Hostname field. Fill out the specific details for your KVM server to set up your connection. Keep in mind that in order for this to work, the username you include here will need to be able to access the server via SSH and have permissions to the hypervisor (that is, be a member of the kvm and libvirt groups we set up earlier), and the libvirtd service must be running on the server. If all of these requirements are met, you’ll have a new connection set up to your KVM server when you click Connect. You might see a pop-up dialog box with the text Are you sure you wish to continue connecting (yes/no)?. If you do, type yes and press Enter.

Either way, you should be prompted for your password to your KVM server; type that in and press Enter. You should now have a connection listed in your virt-manager application. You can see the connection I added in the following screenshot; it’s the second one on the list. The first connection is localhost since I also have KVM running on my local laptop in addition to having it installed on a remote server:

Figure 16.6: virt-manager with a new connection added

We’re almost at a point where we’ll be able to test our KVM server. But first, we’ll need a storage group for ISO images, for use when installing operating systems on our VMs. When we create a VM, we can attach an ISO image from our ISO storage group to our VM, which will allow it to install the operating system.

To create this storage pool, open virt-manager if it’s not open already. Right-click on the listing for your server connection, and then click on Details. You’ll see a new window showing details regarding your KVM server. Click on the Storage tab:

Figure 16.7: The first screen while setting up a new storage pool

At first, you’ll only see the default storage pool, backed by the images directory whose permissions we adjusted earlier. Now, we can add our ISO storage pool. Click on the plus symbol in the bottom-left corner to create the new pool:

Figure 16.8: The storage tab of the virt-manager application

In the Name field, type ISO. You can actually name it anything you want, but ISO makes sense, considering it will be storing ISO images. For the Target Path field, set it to /var/lib/libvirt/images/ISO unless you have a different directory in your filesystem for VM storage. Click Finish to finalize our changes. We should also update the permissions for this directory so that it’s owned by the proper user, and members of the kvm group have read and write access to it:

sudo chown root:kvm /var/lib/libvirt/images/ISO
sudo chmod g+rw /var/lib/libvirt/images/ISO

Congratulations! You now have a fully configured KVM server for creating and managing VMs. Our server has a place to store VMs as well as ISO images. You should also be able to connect to this instance using virt-manager, as we’ve done in this section. Next, I’ll walk you through the process of setting up your first VM. Before we get to that, I recommend you copy some ISO images over to your KVM server. It doesn’t really matter which ISO image you use—any operating system should suffice. If in doubt, you can simply download Ubuntu Server 22.04 again like we did back in Chapter 1, Deploying Ubuntu Server, when we set up our initial installation.

After you’ve chosen an ISO file and you’ve downloaded it, copy it over to your server via scp or rsync, and move it into the /var/lib/libvirt/images/ISO directory. Both of those utilities were covered in Chapter 12, Sharing and Transferring Files. Once the file has been copied over, you should have everything you need for now.

Creating virtual machines

Now, the time has come to put your new VM server to the test and create a VM. At this point, I’m assuming that the following is true:

  • You’re able to connect to your KVM server via virt-manager
  • You’ve already copied one or more ISO images to the server
  • Your storage directory has at least 10 GB of space available
  • The KVM server has enough free RAM to be associated with the VM you intend on creating

Go ahead and open up virt-manager, and let’s get started!

In virt-manager, right-click your server connection and click on New to start the process of creating a new VM. The default selection will be on Local install media (ISO image or CDROM); leave this selection as is and click on Forward:

Figure 16.9: The first screen while setting up a new VM

On the next screen, click on Browse to open up another window where you can select the ISO image you’ve downloaded:

Figure 16.10: Creating a new VM and setting the VM options

If you click on your ISO storage pool, you should see a list of ISO images you’ve downloaded:

Figure 16.11: Choosing an ISO image during VM creation

If you don’t see any ISO images here, you may need to click the refresh icon. In my sample server, I added an install image for Ubuntu Server 22.04, which you can see in the list. Again, you can use whatever operating system you prefer. Click on the ISO image name to highlight it, and then click Choose Volume to finalize the selection. Then, click Forward to continue to the next screen.

Next, you’ll be asked to allocate RAM and CPU resources to the VM:

Figure 16.12: Adjusting the RAM and CPU count for the new VM

For most Linux distributions with no GUI, 2,048 MB is plenty (unless your workload demands more). One CPU core is fine for lightweight workloads, but consider adding more if the documentation for the application you intend on running recommends more than that. The resources you select here will depend on what you have available on your host. Click on Forward when you’ve finished allocating resources.

Next, you’ll allocate free disk space for your VM’s virtual hard disk:

Figure 16.13: Allocating storage resources for the new VM

Set the disk image size to however much space you feel is relevant for the purpose of the VM. Click on Forward when done.

Finally, you’ll name your VM:

Figure 16.14: Naming the new VM

This won’t be the hostname of the VM; it’s just the name you’ll see when you see the VM listed in virt-manager. When you click on Finish, the VM will start and it will automatically boot into the install ISO you’ve attached to the VM near the beginning of the process. The installation process for that operating system will then begin:

Figure 16.15: Installing Ubuntu Server inside a VM

When you click on the VM window, it will steal your keyboard and mouse and dedicate them to the window. Press Ctrl and Alt at the same time to release this control and regain full control of your keyboard and mouse.

Unfortunately, I can’t walk you through the installation process of your VM’s operating system since there are hundreds of possible candidates you may be installing. If you’re installing another instance of Ubuntu Server, you can refer back to Chapter 1, Deploying Ubuntu Server, where we walked through the process. The process will be the same in the VM. From here, you should be able to create as many VMs as you need and have resources for.

Next, we’ll look at some concepts surrounding networking for our VMs.

Bridging the virtual machine network

Your KVM VMs will use their own network unless you configure bridged networking. This means your VMs will get an IP address in their own network, instead of yours. By default, each machine will be a member of the 192.168.122.0/24 network, with an IP address in the range of 192.168.122.2 to 192.168.122.254. If you’re utilizing KVM VMs on your personal laptop or desktop, this behavior might be adequate. You’ll be able to SSH into your VMs via their IP addresses if you’re connecting from the same machine the VMs are running on. If this satisfies your use case, there’s no further configuration you’ll need to do.

Bridged networking allows your VMs to receive an IP address from the DHCP server on your network instead of its internal one, which will allow you to communicate with your VMs from any other machine on your network. This use case is preferable if you’re setting up a central VM server to power infrastructure for your small office or organization, as your DHCP server can become a single source of truth for all of the IP addresses in use in your organization. With a bridged network on your VM server, each VM will be treated as any other network device. All you’ll need is a wired network interface, as wireless cards typically don’t work with bridged networking.

That last point is very important. Some network cards don’t support bridging, and if yours doesn’t, you won’t be able to use a bridge with your VM server unless you replace the network card. Before continuing, you may want to ensure your network card supports bridging by reading the documentation from the vendor of your device. In my experience, most wired cards made by Intel support bridging, and most wireless cards do NOT. Make sure you back up the Netplan configuration file before changing it, so you can revert back to the original version if you find that bridging doesn’t work for you.

To set up bridged networking, we’ll need to create a new interface on our server (the one that’s intended for hosting virtual machines). Open up the /etc/netplan/00-installer-config.yaml file in your text editor with sudo. We already talked about this file in Chapter 10, Connecting to Networks, so I won’t go into too much detail about it here. Basically, this file includes the configuration for each of our network interfaces, and this is where we’ll add our new bridged interface.

Make sure you make a backup of the original Netplan configuration file, and then replace its contents with the following. Be sure to replace enp0s3 (the interface name) with your actual wired interface name if it’s different. There are two occurrences of it in the file.

If you’re reading the digital version of this book, it’s highly recommended that you refrain from copying and pasting the following code, but rather type it manually or copy it from the GitHub URL for the book’s code bundle. The reason is that the YAML format is extremely picky about spaces, and if you end up with a mix of spaces and tabs, the file might not work. When Netplan errors, it can be very hard to figure out exactly what it’s complaining about, but spacing is quite often the culprit even if the error output doesn’t lead you to believe so.

Take your time while configuring this file. If you make a single mistake, you will likely not have network access to the machine once it restarts:

network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: false
  bridges:
    br0:
      interfaces: [enp0s3]
      dhcp4: true
      parameters:
        stp: false
        forward-delay: 0

After you make the change, you can apply the new settings immediately, or simply reboot the server. If you have a monitor and keyboard hooked up to the server, the following command is the easiest way to activate the new configuration:

sudo netplan apply 

If you’re connected to the server via SSH, restarting the network configuration will likely result in the server becoming inaccessible because the SSH connection will likely drop as soon as the network stops. This will disrupt the connection and prevent networking from starting back up. If you know how to use screen or tmux, you can run the restart command from within either; otherwise, it may just be simpler for you to reboot the server.

After networking restarts or the server reboots, check whether you can still access network resources, such as pinging websites and accessing other network nodes from it. If you can, you’re all set. If you’re having any trouble, make sure you edited the Netplan config file properly.

Now, you should see an additional network interface listed when you run ip addr show. The interface will be called br0. The br0 interface should have an IP address from your DHCP server, in place of your enp0s3 interface (or whatever it may be named on your system). From this point forward, you’ll be able to use br0 for your VM’s networking, instead of the internal network. The internal KVM network will still be available, but you can select br0 to be used instead when you create new VMs.

If you have a VM you’ve already created that you’d like to switch to utilize your bridged networking, you can use the following steps to convert it:

  1. First, open virt-manager and double-click on your VM. A new window with a graphical console of your VM will open.
  2. The second button along the top (which appears as a blue circle) will open the Virtual Hardware Details tab, which will allow you to configure many different settings for the VM, such as the CPU count, the RAM amount, the boot device order, and more.
  3. Among the options on the left-hand side of the screen, there will be one that reads NIC and shows part of the VM’s network card’s MAC address. If you click on this, you can configure the VM to use your new bridge by selecting it in the list.
  4. Finally, click on Apply. You may have to restart the VM for the changes to take effect:

Figure 16.16: Configuring a VM to use bridge br0

While creating a brand-new VM, there’s an additional step you’ll need to do in order to configure the VM to use bridged networking. In the last step of the process, where you set a name for the VM (as shown in Figure 16.14), you’ll also see Advanced options listed near the bottom of the window. Expand this, and you’ll be able to set your network name. Change the dropdown in this section to Specify shared device name and set the bridge Name to br0. Now, you can click on Finish to finalize the VM as before, and it should use your bridge whenever it starts up.

From this point onward, you should have not only a fully configured KVM server or instance but also a solution that can be treated as a full citizen of your network. Your VMs will be able to receive an IP address from a DHCP server and communicate with other network nodes directly. If you have a very beefy KVM server, you may even be able to consolidate other network appliances into VMs to save space, which is basically the entire purpose of virtualization.

In the next section, we’ll simplify the process a bit by discussing the creation of a template that can be used to act as a preconfigured starting point when setting up a new VM.

Simplifying virtual machine creation with cloning

Now that we have a KVM server, and we can spin up an army of VMs to do our bidding, we can try and find clever ways of automating some of the workload of setting up a new VM. Every time we go to create a new VM, we need to go through the entire installation process for its operating system again. While this process is not difficult, we can certainly simplify it.

Most prominent virtualization solutions include a feature that allows you to create a VM Template. With a template, we can create a VM once and get it completely configured. Then, we can convert it into a template and use it as a base for all future VMs that will use that same operating system. This saves a tremendous amount of time. You’ll probably recall the handful of screens you had to navigate through to install Ubuntu Server in our first chapter. Imagine not having to go through that process again (or at least not nearly as often).

Unfortunately, as great as QEMU/KVM is, it doesn’t have a template feature. This glaring hole in its feature set is a sizable setback, but thankfully we Linux administrators are very clever, and we can easily work around this to create a solution that’s essentially the same thing as templates.

Take the following screenshot, for example:

Figure 16.17: Virtual Machine Manager with a template listed

In the screenshot, you can see two VMs, ubuntu22.04 and ubuntu-server-template. Although its name would lead you to believe otherwise, the latter is not a template at all; it’s just a VM. There’s nothing really different about it, aside from the fact that it isn’t running. What it is, though, is a clever workaround (if I do say so myself). If I want to create a new VM, I simply right-click on it, then click Clone.

The following window will appear:

Figure 16.18: Cloning a VM

After giving the new VM a name and clicking Clone in this window, I have a copy of the original to serve as my new VM. It uses the original as a base, which I’ve already configured. Since Ubuntu Server was installed on the “template,” I don’t need to do all that work again.

If you create virtual machine templates for production use, it’s highly recommended that you check out cloud-init, which can help generalize an Ubuntu installation by, among other things, regenerating its SSH host keys and machine ID. cloud-init is beyond the scope of this book but is definitely essential if you want to go even deeper into the topic of generating virtual machine templates.
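If you’d rather generalize a template by hand, a common manual routine looks something like the following. These are standard commands, but treat the exact steps as a starting point rather than a complete recipe; cloud-init automates this and more:

```shell
# Run INSIDE the template VM before its final shutdown:
sudo rm -f /etc/ssh/ssh_host_*        # clones must not share host keys
sudo truncate -s 0 /etc/machine-id    # an empty file is regenerated on boot
sudo poweroff

# Then, on each clone's first boot, regenerate the SSH host keys:
sudo dpkg-reconfigure openssh-server
```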

Think about the tasks that you find yourself doing manually after setting up a new Ubuntu Server instance. With a base VM being used as if it were a template, you can include any tweaks or customizations you find yourself implementing right into that VM, so every time you clone it, all that work is done for you automatically. So long as you maintain your base VM, you can spin up as many VMs from it as you need and be able to do so with minimal configuration steps.

We’ve used virt-manager quite a bit in this chapter to customize our VMs, and while it’s a great utility, we should also understand how to manage our infrastructure without it. In the next section, we’ll take a look at some command-line examples of managing VMs.

Managing virtual machines via the command line

In this chapter, I showed you how to manage VMs with virt-manager. This is great if you have a secondary machine with a GUI running Linux as its operating system. But what do you do if such a machine isn’t available, and you’d like to perform simple tasks such as rebooting a VM or checking to see which VMs are running on the server?

On the VM server itself, you have access to the virsh suite of commands, which will allow you to manage VMs even if a GUI isn’t available. To use these commands, simply connect to the machine that stores your VMs via SSH. What follows are some easy examples to get you started. Here’s the first one:

virsh list

This command will return an output like that shown in the following screenshot:

Figure 16.19: Showing running VMs with the virsh list command

With one command, we were able to list the VMs running on the server. In the example screenshot, you can see that I have a single VM running. If you’d also like to see non-running instances, simply add the --all option to the command.

We can manage the state of our VMs with any of the following commands:

  • virsh start vm-name
  • virsh shutdown vm-name
  • virsh suspend vm-name
  • virsh resume vm-name
  • virsh destroy vm-name
  • virsh undefine vm-name

The command syntax for virsh is extremely straightforward. By looking at the previous list of commands, you should be able to glean exactly what they do: virsh allows us to start, shut down, suspend, and resume a VM. The virsh destroy command is potentially destructive, as we’d use it to halt a VM abruptly. It’s essentially the same as pulling the power cable on a physical server; it stops the instance immediately. You should only run that command when you are dealing with an unresponsive VM. Finally, the virsh undefine command deletes a VM’s definition, but you’ll have to remove any associated disk files yourself with the rm command. The default directory for disk files is /var/lib/libvirt/images, so you can look inside that directory for any disk files that belong to the VM you’ve deleted (they will be named the same as the VM).
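Because virsh output is plain text, it also lends itself to scripting. The following illustrates the idea against a saved sample of virsh list --all output (the VM names match the earlier screenshots; on a real server, you’d pipe virsh list --all directly into awk and feed the names to virsh start):

```shell
# A sample of 'virsh list --all' output (illustrative data):
cat > /tmp/virsh-list.txt <<'EOF'
 Id   Name                     State
-----------------------------------------
 1    ubuntu22.04              running
 -    ubuntu-server-template   shut off
EOF

# Skip the two header lines and print the names of VMs that are shut
# off. On a real host, you could then start each one with:
#   virsh list --all | awk 'NR>2 && /shut off/ {print $2}' | xargs -r -n1 virsh start
awk 'NR>2 && /shut off/ {print $2}' /tmp/virsh-list.txt
```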

That’s not all virsh can do, however. We can actually create a VM with the virsh suite of commands as well. Learning how to do so is a good idea if you don’t use Linux as your workstation operating system, or you don’t have access to virt-manager for some reason. However, manually creating VM disk images and configuration is outside the scope of this chapter. The main goal is for you to familiarize yourself with managing VMs via virsh, and these simple basics will allow you to expand your knowledge further.

Summary

In this chapter, we took a look at virtualization, specifically with QEMU/KVM. We walked through the installation of KVM and the configuration required to get our virtualization server up and running. We walked through the process of creating a bridged network so that our VMs can be accessible from the rest of the network and created our first VM. In addition, although QEMU/KVM doesn’t have its own solution for templating, we worked around that and created our own solution.

In Chapter 17, Running Containers, we’ll take a look at containerization, which will include both Docker and LXD. Stay tuned!


Join our community on Discord

Join our community’s Discord space for discussions with the author and other readers:

https://packt.link/LWaZ0
