
16. Linux Boot and Startup


Objectives

In this chapter you will learn
  • The difference between Linux boot and startup

  • What happens during the hardware boot sequence

  • What happens during the Linux boot sequence

  • What happens during the Linux startup sequence

  • How to manage and modify the Linux boot and startup sequences

  • The function of the display and window managers

  • How the login process works for both virtual consoles and a GUI

  • What happens when a user logs off

This chapter explores the hardware boot sequence, the bootup sequence using the GRUB2 bootloader, and the startup sequence as performed by the systemd initialization system. It covers in detail the sequence of events required to change the state of the computer from off to fully up and running with a user logged in.

This chapter is about modern Linux distributions like Fedora and other Red Hat–based distributions that use systemd for startup, shutdown, and system management. systemd is the modern replacement for init and SystemV init scripts.

Overview

The complete process that takes a Linux host from an off state to a running state is complex, but it is open and knowable. Before we get into the details, a quick overview of what happens from the time the host hardware is turned on until the system is ready for a user to log in will help orient us. Most of the time we hear about “the boot process” as a single entity, but it is not. There are, in fact, three parts to the complete boot and startup process:
  • Hardware boot which initializes the system hardware

  • Linux boot which loads the Linux kernel and systemd

  • Linux startup in which systemd makes the host ready for productive work

It is important to separate the hardware boot, the Linux boot process, and the Linux startup, and to explicitly define the demarcation points between them. Understanding these differences and the part each plays in getting a Linux system to a productive state makes it possible to manage these processes and to better determine the portion in which a problem is occurring during what most people refer to simply as “boot.”

Hardware boot

The first step of the Linux boot process really has nothing whatever to do with Linux. This is the hardware portion of the boot process and is the same for any Intel-based operating system.

When power is first applied to the computer, or the VM we have created for this course, it runs the power-on self-test (POST)1 which is part of BIOS2 or the much newer Unified Extensible Firmware Interface3 (UEFI). BIOS stands for Basic I/O System, and POST stands for power-on self-test. When IBM designed the first PC back in 1981, BIOS was created to initialize the hardware components. POST is the part of BIOS whose task is to ensure that the computer hardware functions correctly. If POST fails, the computer may not be usable, and so the boot process does not continue.

Most modern motherboards provide the newer UEFI as a replacement for BIOS. Many motherboards also provide legacy BIOS support. Both BIOS and UEFI perform the same functions – hardware verification and initialization, and loading the boot loader. The VM we created for this course uses a BIOS interface which is perfectly fine for our purposes.

BIOS/UEFI POST checks basic operability of the hardware. Then it locates the boot sectors on all attached bootable devices including rotating or SSD hard drives, DVD or CD-ROM, or bootable USB memory sticks like the live USB device we used to install the StudentVM1 virtual machine. The first boot sector it finds that contains a valid master boot record (MBR)4 is loaded into RAM, and control is then transferred to the RAM copy of the boot sector.

The BIOS/UEFI user interface can be used to configure the system hardware for things like overclocking, enabling or disabling specific CPU cores, specifying the devices from which the system may boot, and the sequence in which those devices are to be searched for a bootable boot sector. I no longer create or boot from bootable CD or DVD devices; I only use bootable USB thumb drives to boot from external, removable devices.

Because I sometimes do boot from an external USB drive – or in the case of a VM, a bootable ISO image like that of the live USB device – I always configure my systems to boot first from the external USB device and then from the appropriate internal disk drive. This is not considered secure in most commercial environments, but then I do a lot of boots to external USB drives. If the whole computer is stolen or destroyed in a natural disaster, I can revert to the backups5 I keep in my safe deposit box.

In most environments you will want to be more secure and set the host to boot from the internal boot device only. Use a BIOS password to prevent unauthorized users from accessing BIOS to change the default boot sequence.

Hardware boot ends when the boot sector assumes control of the system.

Linux boot

The boot sector that is loaded by BIOS is really stage 1 of the GRUB6 boot loader. The Linux boot process itself is composed of multiple stages of GRUB. We consider each stage in this section.

GRUB

GRUB2 is the newest version of the GRUB bootloader and is used much more frequently these days. We will not cover GRUB1 or LILO in this course because they are much older than GRUB2.

Because it is easier to write and say GRUB than GRUB2, I will use the term GRUB in this chapter, but I will be referring to GRUB2 unless specified otherwise. GRUB2 stands for “GRand Unified Bootloader, version 2,” and it is now the standard bootloader for most current Linux distributions. GRUB is the program which makes the computer just smart enough to find the operating system kernel and load it into memory, but it takes three stages of GRUB to do this. Wikipedia has an excellent article on GNU GRUB.7

GRUB has been designed to be compatible with the multiboot specification, which allows GRUB to boot many versions of Linux and other free operating systems. It can also chain-load the boot record of proprietary operating systems. GRUB allows the user to choose to boot from among several different kernels for your Linux distribution if more than one is present due to system updates. This affords the ability to boot to a previous kernel version if an updated one fails somehow or is incompatible with an important piece of software. GRUB2 is configured using the /boot/grub2/grub.cfg file, which, as we will see later in this chapter, is generated rather than edited directly.

GRUB1 is now considered to be legacy and has been replaced in most modern distributions with GRUB2, which is a complete rewrite of GRUB1. Red Hat-based distros upgraded to GRUB2 around Fedora 15 and CentOS/RHEL 7. GRUB2 provides the same boot functionality as GRUB1, but GRUB2 also provides a mainframe-like command-based pre-OS environment and allows more flexibility during the pre-boot phase.

The primary function of GRUB is to get the Linux kernel loaded into memory and running. The use of GRUB2 commands within the pre-OS environment is outside the scope of this chapter. Although GRUB does not officially use the stage terminology for its three stages, it is convenient to refer to them in that way, so I will.

GRUB stage 1

As mentioned in the discussion of the hardware boot, at the end of POST, BIOS/UEFI searches the attached disks for a boot record, which is located in the master boot record (MBR)4; it loads the first one it finds into memory and then starts execution of the boot record. The bootstrap code, that is, GRUB stage 1, is very small because it must fit into the first 512-byte sector on the hard drive along with the partition table.8 The total amount of space allocated for the actual bootstrap code in a classic, generic MBR is 446 bytes. The 446-byte file for stage 1 is named boot.img and does not contain the partition table. The partition table is created when the device is partitioned and is overlaid onto the boot record starting at byte 447.
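If you want to see this layout for yourself, you can dump just the partition-table region of the boot record. This is a minimal sketch using the standard dd and od tools, assuming /dev/sda is the boot drive as in the experiments later in this chapter:

# Skip the 446 bytes of bootstrap code, then dump the 64-byte partition
# table and the 2-byte boot signature (0x55 0xAA) that end the sector.
dd if=/dev/sda bs=1 skip=446 count=66 2>/dev/null | od -A d -t x1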

In UEFI systems, the partition table has been moved out of the MBR and into the space immediately following the MBR. This provides more space for defining partitions, so it allows a larger number of partitions to be created.

Because the boot record must be so small, it is also not very smart and does not understand filesystem structures such as EXT4. Therefore, the sole purpose of stage 1 is to load GRUB stage 1.5. To accomplish this, stage 1.5 of GRUB must be located in the space between the boot record (along with any UEFI partition data) and the first partition on the drive. After loading GRUB stage 1.5 into RAM, stage 1 turns control over to stage 1.5.

Experiment 16-1

Log in to a terminal session as root if one is not already available, and run the following command to verify the identity of the boot drive on your VM. It should be the same drive that contains the boot partition:
[root@studentvm1 ~]# lsblk -i
NAME                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                          8:0    0   60G  0 disk
|-sda1                       8:1    0    1G  0 part /boot
`-sda2                       8:2    0   59G  0 part
  |-fedora_studentvm1-root 253:0    0    2G  0 lvm  /
  |-fedora_studentvm1-swap 253:1    0    6G  0 lvm  [SWAP]
  |-fedora_studentvm1-usr  253:2    0   15G  0 lvm  /usr
  |-fedora_studentvm1-home 253:3    0    4G  0 lvm  /home
  |-fedora_studentvm1-var  253:4    0   10G  0 lvm  /var
  `-fedora_studentvm1-tmp  253:5    0    5G  0 lvm  /tmp
[root@studentvm1 ~]#

Use the dd command to view the boot record of the boot drive. For this experiment I assume it is assigned to the /dev/sda device. The bs= argument in the command specifies the block size, and the count= argument specifies the number of blocks to dump to STDOUT. The if= argument (InFile) specifies the source of the data stream, in this case, the /dev/sda device:

[Figure: the dd if=/dev/sda bs=512 count=1 command and its raw output, dumping the boot record to the terminal]
This prints the text of the boot record, which is the first block on the disk – any disk. In this case, there is information about the filesystem and, although it is unreadable because it is stored in binary format, the partition table. Stage 1 of GRUB or some other boot loader is located in this sector, but that, too, is mostly unreadable by us mere humans. We can see a couple of messages in ASCII text that are stored in the boot record. It might be easier to read these messages if we do this a bit differently. The od command (octal dump) displays the data stream piped to it in octal format in a nice matrix that makes the content a bit easier to read. The -a option tells the command to convert the data into readable ASCII characters where possible. The trailing dash (-) at the end of the command tells od to take input from the STDIN stream rather than from a file:
[root@studentvm1 ~]# dd if=/dev/sda bs=512 count=1 | od -a -
1+0 records in
1+0 records out
0000000   k   c dle dle  so   P   < nul   0   8 nul nul  so   X  so   @
0000020   {   > nul   |   ? nul ack   9 nul stx   s   $   j   ! ack nul
0000040 nul   >   > bel   8 eot   u  vt etx   F dle soh   ~   ~ bel   u
0000060   s   k syn   4 stx   0 soh   ; nul   |   2 nul  nl   t soh  vt
0000100   L stx   M dc3   j nul   | nul nul   k   ~ nul nul nul nul nul
0000120 nul nul nul nul nul nul nul nul nul nul nul nul soh nul nul nul
0000140 nul nul nul nul del   z dle dle   v   B nul   t enq   v   B   p
0000160   t stx   2 nul   j   y   | nul nul   1   @  so   X  so   P   <
0000200 nul  sp   {  sp   d   |   < del   t stx  bs   B   R   > enq   |
0000220   1   @  ht   D eot   @  bs   D del  ht   D stx   G eot dle nul
0000240   f  vt  rs      |   f  ht     bs   f  vt  rs   `   |   f  ht
0000260     ff   G   D ack nul   p   4   B   M dc3   r enq   ; nul   p
0000300   k stx   k   K   `  rs   9 nul soh  so   [   1   v   ? nul nul
0000320  so   F   |   s   %  us   a   `   8 nul   ;   M sub   f enq   @
0000340   u  gs   8 bel   ;   ? nul nul   f   1   v   f   ;   T   C   P
0000360   A   f   9 nul stx nul nul   f   :  bs nul nul nul   M sub   a
0000400 del   &   Z   |   >  us   }   k etx   >   .   }   h   4 nul   >
0000420   3   }   h   . nul   M can   k   ~   G   R   U   B  sp nul   G
0000440   e   o   m nul   H   a   r   d  sp   D   i   s   k nul   R   e
0000460   a   d nul  sp   E   r   r   o   r  cr  nl nul   ; soh nul   4
0000500  so   M dle   ,   < nul   u   t   C nul nul nul nul nul nul nul
0000520 nul nul nul nul nul nul nul nul nul nul nul nul nul nul nul nul
*
0000660 nul nul nul nul nul nul nul nul      ;   ^   . nul nul nul eot
0000700 soh eot etx   ~   B del nul  bs nul nul nul nul  sp nul nul   ~
0000720   B del  so   ~   B del nul  bs  sp nul nul   x   _ bel nul nul
0000740 nul nul nul nul nul nul nul nul nul nul nul nul nul nul nul nul
0000760 nul nul nul nul nul nul nul nul nul nul nul nul nul nul   U   *
0001000

Note the asterisk (*, sometimes called a splat) between addresses 0000520 and 0000660. It indicates that all of the data in that range is the same as the last line before it, 0000520, which is all null characters; od suppresses the duplicate lines to save space in the output stream. The addresses are in octal, which is base 8.

A generic boot record that does not contain a partition table is located in the /boot/grub2/i386-pc directory. Let’s look at the content of that file. Because the file already has a limited length, it is not necessary to specify a block size and count as we did with dd; we can simply run od directly against the file name, although piping dd into od would work, too.

Note In Fedora 30 and above, the boot.img files are located in the /usr/lib/grub/i386-pc/ directory. Be sure to use that location when performing the next part of this experiment.
[root@studentvm1 ~]# od -a /boot/grub2/i386-pc/boot.img
0000000   k   c dle nul nul nul nul nul nul nul nul nul nul nul nul nul
0000020 nul nul nul nul nul nul nul nul nul nul nul nul nul nul nul nul
*
0000120 nul nul nul nul nul nul nul nul nul nul nul nul soh nul nul nul
0000140 nul nul nul nul del   z   k enq   v   B nul   t enq   v   B   p
0000160   t stx   2 nul   j   y   | nul nul   1   @  so   X  so   P   <
0000200 nul  sp   {  sp   d   |   < del   t stx  bs   B   R   > enq   |
0000220   1   @  ht   D eot   @  bs   D del  ht   D stx   G eot dle nul
0000240   f  vt  rs      |   f  ht     bs   f  vt  rs   `   |   f  ht
0000260     ff   G   D ack nul   p   4   B   M dc3   r enq   ; nul   p
0000300   k stx   k   K   `  rs   9 nul soh  so   [   1   v   ? nul nul
0000320  so   F   |   s   %  us   a   `   8 nul   ;   M sub   f enq   @
0000340   u  gs   8 bel   ;   ? nul nul   f   1   v   f   ;   T   C   P
0000360   A   f   9 nul stx nul nul   f   :  bs nul nul nul   M sub   a
0000400 del   &   Z   |   >  us   }   k etx   >   .   }   h   4 nul   >
0000420   3   }   h   . nul   M can   k   ~   G   R   U   B  sp nul   G
0000440   e   o   m nul   H   a   r   d  sp   D   i   s   k nul   R   e
0000460   a   d nul  sp   E   r   r   o   r  cr  nl nul   ; soh nul   4
0000500  so   M dle   ,   < nul   u   t   C nul nul nul nul nul nul nul
0000520 nul nul nul nul nul nul nul nul nul nul nul nul nul nul nul nul
*
0000760 nul nul nul nul nul nul nul nul nul nul nul nul nul nul   U   *
0001000
There is a second area of duplicated data in this output, between addresses 0000020 and 0000120. Because that area differs from the actual boot record and is all null in this file, we can infer that this is where the partition table is located in the actual boot record. There is also an interesting utility, strings, that enables us to look at just the ASCII text strings contained in a file:
[root@studentvm1 ~]# strings /boot/grub2/i386-pc/boot.img
TCPAf
GRUB
Geom
Hard Disk
Read
 Error

This tool makes it easier to locate actual text strings than sorting through many lines of the occasional random ASCII characters to find meaningful ones. But note that, like the first line of the preceding output, not all text strings have meaning to humans.

The point here is that the GRUB boot record is installed in the first sector of the hard drive or other bootable media, using the boot.img file as the source. The partition table is then superimposed on the boot record in its specified location.
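For reference, both boot.img and core.img are written to the drive by the grub2-install command. This is shown only as a sketch, assuming /dev/sda is the boot drive; do not run it unless the boot record is actually damaged:

# Rewrite GRUB stage 1 to the MBR and stage 1.5 to the MBR gap on /dev/sda.
grub2-install /dev/sda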

GRUB stage 1.5

As mentioned earlier, stage 1.5 of GRUB must be located in the space between the boot record and the first partition on the disk drive. This space was left unused historically for technical and compatibility reasons and is sometimes called the “boot track” or the “MBR gap.” The first partition on the hard drive begins at sector 63, and with the MBR in sector 0, that leaves 62 512-byte sectors (31,744 bytes) in which to store stage 1.5 of GRUB, which is distributed as the core.img file. The core.img file is 28,535 bytes as of this writing, so there is plenty of space available between the MBR and the first disk partition in which to store it.
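You can check these numbers on your own system. A quick sketch, again assuming /dev/sda; note that newer partitioning tools start the first partition at sector 2048 rather than 63, which leaves an even larger gap:

# Show where each partition starts; see the Start column for sda1.
fdisk -l /dev/sda

# Show the size in bytes of core.img, which must fit into the MBR gap.
stat -c '%s bytes' /boot/grub2/i386-pc/core.img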

Experiment 16-2

The file containing stage 1.5 of GRUB is stored as /boot/grub2/i386-pc/core.img. You can verify this as we did earlier with stage 1 by comparing the code in the file with that stored in the MBR gap of the boot drive:
[root@studentvm1 ~]# dd if=/dev/sda bs=512 count=1 skip=1 | od -a -
1+0 records in
1+0 records out
512 bytes copied, 0.000132697 s, 3.9 MB/s
0000000   R   ?   t soh   f   1   @  vt   E  bs   f   A   `  ht   f   #
0000020   l soh   f  vt   - etx   }  bs nul  si eot   d nul nul   | del
0000040 nul   t   F   f  vt  gs   f  vt   M eot   f   1   @   0 del   9
0000060   E  bs del etx  vt   E  bs   )   E  bs   f soh enq   f etx   U
0000100 eot nul   G eot dle nul  ht   D stx   f  ht     bs   f  ht   L
0000120  ff   G   D ack nul   p   P   G   D eot nul nul   4   B   M dc3
0000140  si stx    nul   ; nul   p   k   h   f  vt   E eot   f  ht   @
0000160  si enq   D nul   f  vt enq   f   1   R   f   w   4  bs   T  nl
0000200   f   1   R   f   w   t eot  bs   T  vt  ht   D  ff   ;   D  bs
0000220  si  cr   $ nul  vt eot   *   D  nl   9   E  bs del etx  vt   E
0000240  bs   )   E  bs   f soh enq   f etx   U eot nul  nl   T  cr   @
0000260   b ack  nl   L  nl   ~   A  bs   Q  nl   l  ff   Z   R  nl   t
0000300  vt   P   ; nul   p  so   C   1   [   4 stx   M dc3   r   q  ff
0000320   C  so   E  nl   X   A   ` enq soh   E  nl   `  rs   A   ` etx
0000340  ht   A   1 del   1   v  so   [   |   s   %  us   >   V soh   h
0000360 ack nul   a etx   }  bs nul  si enq   " del etx   o  ff   i dc4
0000400 del   `   8 bel   ;   ; nul nul  so   C   f   1 del   ? nul stx
0000420   f   ;   T   C   P   A   f   >   l soh nul nul   g   f  vt  so
0000440   f   1   v   f   :  ht nul nul nul   M sub   a   >   X soh   h
0000460   F nul   Z   j nul stx nul nul   >   [ soh   h   : nul   k ack
0000500   >   ` soh   h   2 nul   >   e soh   h   , nul   k   ~   l   o
0000520   a   d   i   n   g nul   . nul  cr  nl nul   G   e   o   m nul
0000540   R   e   a   d nul  sp   E   r   r   o   r nul nul nul nul nul
0000560   ; soh nul   4  so   M dle   F  nl eot   < nul   u   r   C nul
0000600 nul nul nul nul nul nul nul nul nul nul nul nul nul nul nul nul
*
0000760 nul nul nul nul stx nul nul nul nul nul nul nul   o nul  sp  bs
0001000
[root@studentvm1 ~]# dd if=/boot/grub2/i386-pc/core.img bs=512 count=1 | od -a -
1+0 records in
1+0 records out
512 bytes copied, 5.1455e-05 s, 10.0 MB/s
0000000   R   ?   t soh   f   1   @  vt   E  bs   f   A   `  ht   f   #
0000020   l soh   f  vt   - etx   }  bs nul  si eot   d nul nul   | del
0000040 nul   t   F   f  vt  gs   f  vt   M eot   f   1   @   0 del   9
0000060   E  bs del etx  vt   E  bs   )   E  bs   f soh enq   f etx   U
0000100 eot nul   G eot dle nul  ht   D stx   f  ht     bs   f  ht   L
0000120  ff   G   D ack nul   p   P   G   D eot nul nul   4   B   M dc3
0000140  si stx    nul   ; nul   p   k   h   f  vt   E eot   f  ht   @
0000160  si enq   D nul   f  vt enq   f   1   R   f   w   4  bs   T  nl
0000200   f   1   R   f   w   t eot  bs   T  vt  ht   D  ff   ;   D  bs
0000220  si  cr   $ nul  vt eot   *   D  nl   9   E  bs del etx  vt   E
0000240  bs   )   E  bs   f soh enq   f etx   U eot nul  nl   T  cr   @
0000260   b ack  nl   L  nl   ~   A  bs   Q  nl   l  ff   Z   R  nl   t
0000300  vt   P   ; nul   p  so   C   1   [   4 stx   M dc3   r   q  ff
0000320   C  so   E  nl   X   A   ` enq soh   E  nl   `  rs   A   ` etx
0000340  ht   A   1 del   1   v  so   [   |   s   %  us   >   V soh   h
0000360 ack nul   a etx   }  bs nul  si enq   " del etx   o  ff   i dc4
0000400 del   `   8 bel   ;   ; nul nul  so   C   f   1 del   ? nul stx
0000420   f   ;   T   C   P   A   f   >   l soh nul nul   g   f  vt  so
0000440   f   1   v   f   :  ht nul nul nul   M sub   a   >   X soh   h
0000460   F nul   Z   j nul stx nul nul   >   [ soh   h   : nul   k ack
0000500   >   ` soh   h   2 nul   >   e soh   h   , nul   k   ~   l   o
0000520   a   d   i   n   g nul   . nul  cr  nl nul   G   e   o   m nul
0000540   R   e   a   d nul  sp   E   r   r   o   r nul nul nul nul nul
0000560   ; soh nul   4  so   M dle   F  nl eot   < nul   u   r   C nul
0000600 nul nul nul nul nul nul nul nul nul nul nul nul nul nul nul nul
*
0000760 nul nul nul nul stx nul nul nul nul nul nul nul   7 nul  sp  bs
0001000
[root@studentvm1 ~]#

The first sector of each will do for verification, but you should feel free to explore more of the code if you like. There are tools, such as cmp, that we could use to compare the file with the GRUB stage 1.5 data on the hard drive. Looking at the two od listings, the sectors are very nearly identical; only a single byte near the end differs (the “o” versus the “7” just before address 0001000), which is almost certainly where drive-specific location data is patched into the on-disk copy when GRUB is installed.
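A minimal sketch of such a comparison with cmp, assuming the same device and file locations used in this experiment:

# Compare the first sector of the MBR gap with the first sector of core.img.
# cmp is silent when the data matches and reports the first differing byte
# otherwise; expect it to report only the patched byte near the end.
dd if=/dev/sda bs=512 skip=1 count=1 2>/dev/null | cmp -n 512 - /boot/grub2/i386-pc/core.img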

At this point we know the files that contain stages 1 and 1.5 of the GRUB bootloader and where they are located on the hard drive in order to perform their function as the Linux bootloader.

Because stage 1.5 can accommodate a much larger amount of code than stage 1, it has enough room to contain a few common filesystem drivers, such as the standard EXT filesystems and XFS, as well as non-Linux filesystems like FAT and NTFS. The GRUB2 core.img is much more complex and capable than the older GRUB1 stage 1.5. This means that stage 2 of GRUB2 can be located on a standard EXT filesystem, but it cannot be located on a logical volume because it needs to be read from a specific location on the bootable volume before the filesystem drivers have been loaded.

Note that the /boot directory must be located on a filesystem that is supported by GRUB, such as EXT4; not all filesystems are. The function of stage 1.5 is to load the filesystem drivers necessary to locate the stage 2 files in the /boot filesystem and then load stage 2.

GRUB stage 2

All of the files for GRUB stage 2 are located in the /boot/grub2 directory and its subdirectories. GRUB stage 2 does not have an image file like stages 1 and 1.5. Instead, it consists of those files and runtime kernel modules that are loaded as needed from the /boot/grub2 directory and its subdirectories. Some Linux distributions may store these files in the /boot/grub directory.

The function of GRUB stage 2 is to locate and load a Linux kernel into RAM and turn control of the computer over to the kernel. The kernel and its associated files are located in the /boot directory. The kernel files are identifiable as they are all named starting with vmlinuz. You can list the contents of the /boot directory to see the currently installed kernels on your system.

Experiment 16-3

Your list of Linux kernels should be similar to the ones on my VM, but the kernel versions and probably the releases will be different. You should be using the most recent release of Fedora on your VM, so it should be release 29 or even higher by the time you install your VMs. That should make no difference to these experiments:
[root@studentvm1 ~]# ll /boot
total 187716
-rw-r--r--. 1 root root     196376   Apr 23    2018   config-4.16.3-301.fc28.x86_64
-rw-r--r--. 1 root root     196172  Aug 15   08:55   config-4.17.14-202.fc28.x86_64
-rw-r--r--  1 root root     197953   Sep 19   23:02   config-4.18.9-200.fc28.x86_64
drwx------. 4 root root       4096   Apr 30    2018   efi
-rw-r--r--. 1 root root     184380   Jun 28   10:55   elf-memtest86+-5.01
drwxr-xr-x. 2 root root       4096   Apr 25    2018   extlinux
drwx------. 6 root root       4096   Sep 23   21:52   grub2
-rw-------. 1 root root   72032025   Aug 13   16:23   initramfs-0-rescue-7f12524278bd40e9b10a085bc82dc504.img
-rw-------. 1 root root   24768511   Aug 13   16:24   initramfs-4.16.3-301.fc28.x86_64.img
-rw-------. 1 root root   24251484   Aug 18   10:46   initramfs-4.17.14-202.fc28.x86_64.img
-rw-------  1 root root   24313919   Sep 23   21:52   initramfs-4.18.9-200.fc28.x86_64.img
drwxr-xr-x. 3 root root       4096   Apr 25    2018   loader
drwx------. 2 root root      16384   Aug 13   16:16   lost+found
-rw-r--r--. 1 root root     182704   Jun 28   10:55   memtest86+-5.01
-rw-------. 1 root root    3888620   Apr 23    2018   System.map-4.16.3-301.fc28.x86_64
-rw-------. 1 root root    4105662   Aug 15   08:55   System.map-4.17.14-202.fc28.x86_64
-rw-------  1 root root    4102469   Sep 19   23:02   System.map-4.18.9-200.fc28.x86_64
-rwxr-xr-x. 1 root root    8286392   Aug 13   16:23   vmlinuz-0-rescue-7f12524278bd40e9b10a085bc82dc504
-rwxr-xr-x. 1 root root    8286392   Apr 23    2018   vmlinuz-4.16.3-301.fc28.x86_64
-rwxr-xr-x. 1 root root    8552728   Aug 15   08:56   vmlinuz-4.17.14-202.fc28.x86_64
-rwxr-xr-x  1 root root    8605976   Sep 19   23:03   vmlinuz-4.18.9-200.fc28.x86_64
[root@studentvm1 ~]#

You can see that there are four kernel images in this list (three regular kernels plus the rescue kernel) along with their supporting files. The System.map files are symbol tables that map the physical addresses of symbols such as variables and functions. The initramfs files are used early in the Linux boot process, before the filesystem drivers have been loaded and the filesystems mounted.
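If you are curious about what an initramfs image contains, the lsinitrd tool from the dracut package, which is installed by default on Fedora, will list its contents:

# List the kernel modules, binaries, and configuration files packed into
# the initramfs image for the currently running kernel.
lsinitrd /boot/initramfs-$(uname -r).img | less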

GRUB supports booting from one of a selection of installed Linux kernels. The DNF package manager supports keeping multiple versions of the kernel so that if a problem occurs with the newest one, an older version of the kernel can be booted. As shown in Figure 16-1, GRUB provides a pre-boot menu of the installed kernels, including a rescue option and, if configured, a recovery option for each kernel.
Figure 16-1. The GRUB boot menu allows selection of a different kernel

The default kernel is always the most recent one that has been installed during updates, and it will boot automatically after a short timeout of five seconds. If the up and down arrows are pressed, the countdown stops, and the highlight bar moves to another kernel. Press Enter to boot the selected kernel.

If almost any key other than the up and down arrows or the “e” or “c” keys is pressed, the countdown stops and GRUB waits for more input. Now you can take your time to use the arrow keys to select a kernel to boot and then press the Enter key to boot from it. Stage 2 of GRUB loads the selected kernel into memory and turns control of the computer over to the kernel.

The rescue boot option is intended as a last resort when attempting to resolve severe boot problems – ones which prevent the Linux system from completing the boot process. When some types of errors occur during boot, GRUB will automatically fall back to booting from the rescue image.

The GRUB menu entries for installed kernels have been useful to me. Before I became aware of VirtualBox, I used some commercial virtualization software that sometimes experienced problems when the Linux kernel was updated. Although the company tried to keep up with kernel variations, it eventually stopped updating its software to run with every kernel version. Whenever it did not support a kernel version to which I had updated, I used the GRUB menu to select an older kernel which I knew would work. I did discover that maintaining only three older kernels was not always enough, so I configured the DNF package manager to save up to ten kernels. DNF package manager configuration is covered in Volume 1, Chapter 12.
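For reference, the number of kernels that DNF retains is controlled by the installonly_limit option in /etc/dnf/dnf.conf; a minimal sketch of that setting:

# /etc/dnf/dnf.conf
[main]
# Keep up to ten kernels instead of the default three.
installonly_limit=10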

Configuring GRUB

GRUB is configured with /boot/grub2/grub.cfg, but we do not change that file because it can get overwritten when the kernel is updated to a new version. Instead, we make modifications to the /etc/default/grub file.

Experiment 16-4

Let’s start by looking at the unmodified version of the /etc/default/grub file:
[root@studentvm1 ~]# cd /etc/default ; cat grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="resume=/dev/mapper/fedora_studentvm1-swap rd.lvm.lv=fedora_studentvm1/root rd.lvm.lv=fedora_studentvm1/swap rd.lvm.lv=fedora_studentvm1/usr rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
[root@studentvm1 default]#

Chapter 6 of the GRUB documentation referenced in footnote 6 contains a complete listing of all the possible entries in the /etc/default/grub file, but there are three that we should look at here.

I always change GRUB_TIMEOUT, the number of seconds for the GRUB menu countdown, from five to ten, which gives a bit more time to respond to the GRUB menu before the countdown hits zero.

I also change GRUB_DISABLE_RECOVERY from “true” to “false,” which is a bit of reverse programmer logic. I have found that the rescue boot option does not always work. To circumvent this problem, I change this statement to allow the grub2-mkconfig command to generate a recovery option for each installed kernel; I have found that when the rescue option fails, these options do work. This also provides recovery kernels for use in case a particular tool or software package needs to run on a specific kernel version.

Note Changing GRUB_DISABLE_RECOVERY in the grub default configuration no longer works starting in Fedora 30. The other changes, GRUB_TIMEOUT and removing “rhgb quiet” from the GRUB_CMDLINE_LINUX variable, still work.

The GRUB_CMDLINE_LINUX line can be changed, too. This line lists the command-line parameters that are passed to the kernel at boot time. I usually delete the last two parameters on this line. The rhgb parameter stands for Red Hat Graphical Boot, and it causes the little graphical animation of the Fedora icon to display during the kernel initialization instead of showing boot time messages. The quiet parameter prevents the display of the startup messages that document the progress of the startup and any errors that might occur. Delete both of these entries because SysAdmins need to be able to see these messages. If something goes wrong during boot, the messages displayed on the screen can point us to the cause of the problem.

Change these three lines as described so that your grub file looks like this:
[root@studentvm1 default]# cat grub
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="resume=/dev/mapper/fedora_studentvm1-swap rd.lvm.lv=fedora_studentvm1/root rd.lvm.lv=fedora_studentvm1/swap rd.lvm.lv=fedora_studentvm1/usr"
GRUB_DISABLE_RECOVERY="false"
[root@studentvm1 default]#
Check the current content of the /boot/grub2/grub.cfg file. Run the following command to update the /boot/grub2/grub.cfg configuration file:
[root@studentvm1 grub2]# grub2-mkconfig > /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.18.9-200.fc28.x86_64
Found initrd image: /boot/initramfs-4.18.9-200.fc28.x86_64.img
Found linux image: /boot/vmlinuz-4.17.14-202.fc28.x86_64
Found initrd image: /boot/initramfs-4.17.14-202.fc28.x86_64.img
Found linux image: /boot/vmlinuz-4.16.3-301.fc28.x86_64
Found initrd image: /boot/initramfs-4.16.3-301.fc28.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-7f12524278bd40e9b10a085bc82dc504
Found initrd image: /boot/initramfs-0-rescue-7f12524278bd40e9b10a085bc82dc504.img
done
[root@studentvm1 grub2]#

Recheck the content of /boot/grub2/grub.cfg, which should reflect the changes we made. You can grep for the specific lines we changed to verify that the changes occurred. We could also use an alternative form of this command that specifies the output file directly: grub2-mkconfig -o /boot/grub2/grub.cfg. Either form works, and the results are the same.

Reboot the StudentVM1 virtual machine. Press the Esc key when the GRUB menu is displayed. The first difference you should notice in the GRUB menu is that the countdown timer started at ten seconds. The GRUB menu should now appear similar to that shown in Figure 16-2 with a recovery option for each kernel version. The details of your menu will be different from these.
Figure 16-2. After changing /etc/default/grub and running grub2-mkconfig, the GRUB menu now contains a recovery mode option for each kernel

Use the down arrow key to highlight the recovery option for the default kernel – the second option – and press the Enter key to complete the boot and startup process. This will take you into recovery mode using that kernel. You will also notice many messages displayed on the screen as the system boots and goes through startup. Some of these messages can be seen in Figure 16-3 along with messages pertaining to the rescue shell.

Based on these messages, we can conclude that “recovery” mode is a rescue mode in which we get to choose the kernel version. The system displays a login message:
Give root password for maintenance
(or press Control-D to continue):

Type the root password to log in. There are also instructions on the screen in case you want to reboot or continue into the default runlevel target.

Notice also at the bottom of the screen in Figure 16-3 that the little trail of messages we will embed in the bash startup configuration files in Chapter 17 shows here that the /etc/bashrc and /etc/profile.d/myBashConfig.sh files – along with all of the other bash configuration files in /etc/profile.d – were run at login. I have skipped ahead a bit with this, but I will show you how to test it yourself in Chapter 17. This is good information to have because you will know what to expect in the way of shell configuration while working in recovery mode.

While in recovery mode, explore the system while it is in the equivalent of what used to be called single user mode. The lsblk utility will show that all of the filesystems are mounted in their correct locations, and the ip addr command will show that networking has not been started. The computer is up and running, but it is in a very minimal mode of operation. Only the most essential services are available to enable problem solving. The runlevel command will show that the host is in the equivalent of the old SystemV runlevel 1.
Figure 16-3. After booting to a recovery mode kernel, you use the root password to enter maintenance mode

Before completing this experiment, reboot your VM to one of the older regular kernels, and log in to the desktop. Test a few programs, and then open a terminal session to test some command-line utilities. Everything should work without a problem because the kernel version is not bound to specific versions of the rest of the Linux operating system. Running an alternate kernel is easy and commonplace.

To end this experiment, reboot the system and allow the default kernel to boot. No intervention will be required. You will see all of the kernel boot and startup messages during this normal boot.

There are three different terms that are typically applied to recovery mode: recovery, rescue, and maintenance. These are all functionally the same. Maintenance mode is typically used when the Linux host fails to boot to its default target due to some error that occurs during the boot and startup. Being able to see the boot and startup messages if an error occurs can also provide clues as to where the problem might exist.

I have found that the rescue kernel, the option at the bottom of the GRUB menu in Figures 16-1, 16-2, and 16-3, almost never works; I have tried it on a variety of physical hardware and virtual machines, and it consistently fails. So I need to use the recovery kernels, and that is why I configure GRUB to create those recovery menu options.

In Figure 16-2, after configuring GRUB and running the grub2-mkconfig -o /boot/grub2/grub.cfg command, there are two rescue mode menu options. In my testing I have discovered that the top rescue mode menu option fails but that the bottom rescue mode menu option, the one we just created, does work. But it really does not seem to matter because, as I have said, both rescue and recovery modes provide exactly the same function. This problem is a bug, probably in GRUB, so I reported it to Red Hat using Bugzilla.9

Part of our responsibility as SysAdmins, and part of giving back to the open source community, is to report bugs when we encounter them. Anyone can create an account and log in to report bugs. Updates will be sent to you by e-mail whenever a change is made to the bug report.

The Linux kernel

All Linux kernels are in a self-extracting, compressed format to save space. The kernels are located in the /boot directory, along with an initial RAM disk image and symbol maps. After the selected kernel is loaded into memory by GRUB and begins executing, it must first extract itself from the compressed version of the file before it can perform any useful work. Once the kernel has extracted itself, it loads systemd and turns control over to it.
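You can confirm the compressed kernel format with the file utility; a quick sketch:

# Identify the format of the currently running kernel's image file.
# Typical output identifies it as a Linux kernel bzImage boot executable.
file /boot/vmlinuz-$(uname -r)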

This is the end of the boot process. At this point, the Linux kernel and systemd are running but unable to perform any productive tasks for the end user because nothing else is running: there is no shell to provide a command line, no background processes to manage the network or other communication links, and nothing that enables the computer to perform any productive function.

Linux startup

The startup process follows the boot process and brings the Linux computer up to an operational state in which it is usable for productive work. The startup process begins when the kernel transfers control of the host to systemd.

systemd

systemd10,11 is the mother of all processes, and it is responsible for bringing the Linux host up to a state in which productive work can be done. Its functions, which are far more extensive than those of the old SystemV12 init program, include managing many aspects of a running Linux host, such as mounting filesystems and starting and managing the system services required for a productive Linux host. Any of systemd’s tasks that are not related to the startup sequence are outside the scope of this chapter, but we will explore them in Volume 2, Chapter 13.

First, systemd mounts the filesystems defined by /etc/fstab, including any swap files or partitions. At this point, it can access the configuration files located in /etc, including its own. It uses its configuration link, /etc/systemd/system/default.target, to determine the state, or target, into which it should boot the host. The default.target file is a symbolic link to the true target file. For a desktop workstation, this is typically going to be the graphical.target, which is equivalent to runlevel 5 in SystemV. For a server, the default is more likely to be the multi-user.target, which is like runlevel 3 in SystemV. The emergency.target is similar to single user mode. Targets and services are systemd units.

Figure 16-4 is a comparison of the systemd targets with the old SystemV startup runlevels. The systemd target aliases are provided by systemd for backward compatibility. The target aliases allow scripts — and many SysAdmins like myself — to use SystemV commands like init 3 to change runlevels. Of course the SystemV commands are forwarded to systemd for interpretation and execution.
Figure 16-4. Comparison of SystemV runlevels with systemd targets and some target aliases

Each target has a set of dependencies described in its configuration file. systemd starts the required dependencies. These dependencies are the services required to run the Linux host at a specific level of functionality. When all of the dependencies listed in the target configuration files are loaded and running, the system is running at that target level.

systemd also looks at the legacy SystemV init directories to see if any startup files exist there. If so, systemd uses them as configuration files to start the services they describe. The deprecated network service is a good example of a service that still uses SystemV startup files in Fedora.

Figure 16-5 is copied directly from the bootup man page. It shows a map of the general sequence of events during systemd startup and the basic ordering requirements to ensure a successful startup.

The sysinit.target and basic.target targets can be considered checkpoints in the startup process. Although one of systemd’s design goals is to start system services in parallel, certain services and functional targets must still be started before other services and targets can be started. These checkpoints cannot be passed until all of the services and targets required by a checkpoint are fulfilled.

The sysinit.target is reached when all of the units on which it depends are completed. All of those units (mounting filesystems, setting up swap files, starting udev, setting the random generator seed, initiating low-level services, and setting up cryptographic services if one or more filesystems are encrypted) must be completed, but within the sysinit.target, those tasks can be performed in parallel.

The sysinit.target starts up all of the low-level services and units required for the system to be marginally functional and for moving on to the basic.target.

After the sysinit.target is fulfilled, systemd next starts the basic.target, starting all of the units required to fulfill it. The basic target provides some additional functionality by starting units that are required for all of the next targets. These include setting up things like paths to various executable directories, communication sockets, and timers.

Finally, the user-level targets, multi-user.target or graphical.target, can be initialized. The multi-user.target must be reached before the graphical target dependencies can be met. The underlined targets in Figure 16-5 are the usual startup targets. When one of these targets is reached, startup has completed. If the multi-user.target is the default, then you should see a text mode login on the console. If graphical.target is the default, then you should see a graphical login; the specific GUI login screen you see will depend upon the default display manager.
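You can explore these dependency chains on a running system; for example, systemctl can display the tree of units that a target pulls in:

# Show every unit required by the graphical target; multi-user.target,
# basic.target, and sysinit.target all appear within this tree.
systemctl list-dependencies graphical.target | less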
Figure 16-5. The systemd startup map

The bootup man page also describes and provides maps of the boot into the initial RAM disk and the systemd shutdown process.
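A related tool worth knowing here is systemd-analyze, which reports how long each phase of the boot and startup process took on the most recent boot:

# Summarize the time spent in the kernel, initrd, and userspace startup.
systemd-analyze

# List individual units sorted by how long each took to initialize.
systemd-analyze blame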

Experiment 16-5

So far we have only booted to the graphical.target, so let’s change the default target to multi-user.target to boot into a console interface rather than a GUI interface.

As the root user on StudentVM1, change to the directory in which systemd configuration is maintained and do a quick listing:
[root@studentvm1 ~]# cd /etc/systemd/system/ ; ll
drwxr-xr-x. 2 root root 4096 Apr 25  2018  basic.target.wants
<snip>
lrwxrwxrwx. 1 root root   36 Aug 13 16:23  default.target -> /lib/systemd/system/graphical.target
lrwxrwxrwx. 1 root root   39 Apr 25  2018  display-manager.service -> /usr/lib/systemd/system/lightdm.service
drwxr-xr-x. 2 root root 4096 Apr 25  2018  getty.target.wants
drwxr-xr-x. 2 root root 4096 Aug 18 10:16  graphical.target.wants
drwxr-xr-x. 2 root root 4096 Apr 25  2018  local-fs.target.wants
drwxr-xr-x. 2 root root 4096 Oct 30 16:54  multi-user.target.wants
<snip>
[root@studentvm1 system]#

I have shortened this listing to highlight a few important things that will help us understand how systemd manages the boot process. You should be able to see the entire list of directories and links on your VM.

The default.target entry is a symbolic link13 (symlink, soft link) to the file /lib/systemd/system/graphical.target. List the /lib/systemd/system/ directory to see what else is there:
[root@studentvm1 system]# ll /lib/systemd/system/ | less
You should see files, directories, and more links in this listing, but look for multi-user.target and graphical.target. Now display the contents of default.target which is a link to /lib/systemd/system/graphical.target:
[root@studentvm1 system]# cat default.target
#  SPDX-License-Identifier: LGPL-2.1+
#
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
[Unit]
Description=Graphical Interface
Documentation=man:systemd.special(7)
Requires=multi-user.target
Wants=display-manager.service
Conflicts=rescue.service rescue.target
After=multi-user.target rescue.service rescue.target display-manager.service
AllowIsolate=yes
[root@studentvm1 system]#
The graphical.target file, to which this link points, describes all of the prerequisites and requirements of the graphical user interface. To enable the host to boot to multiuser mode, we need to delete the existing link and then create a new one that points to the correct target. Make /etc/systemd/system the PWD if it is not already:
[root@studentvm1 system]# rm -f default.target
[root@studentvm1 system]# ln -s /lib/systemd/system/multi-user.target default.target
List the default.target link to verify that it links to the correct file:
[root@studentvm1 system]# ll default.target
lrwxrwxrwx 1 root root 37 Nov 28 16:08 default.target -> /lib/systemd/system/multi-user.target
[root@studentvm1 system]#
If your link does not look exactly like that, delete it and try again. List the content of the default.target link:
[root@studentvm1 system]# cat default.target
#  SPDX-License-Identifier: LGPL-2.1+
#
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
[Unit]
Description=Multi-User System
Documentation=man:systemd.special(7)
Requires=basic.target
Conflicts=rescue.service rescue.target
After=basic.target rescue.service rescue.target
AllowIsolate=yes
[root@studentvm1 system]#

The default.target has different requirements in the [Unit] section. It does not require the graphical display manager.

Reboot. Your VM should boot to the console login for virtual console 1 which is identified on the display as tty1. Now that you know what is necessary to change the default target, change it back to the graphical.target using a command designed for the purpose. Let’s first check the current default target:
[root@studentvm1 ~]# systemctl get-default
multi-user.target
[root@studentvm1 ~]# systemctl set-default graphical.target
Removed /etc/systemd/system/default.target.
Created symlink /etc/systemd/system/default.target → /usr/lib/systemd/system/graphical.target.
[root@studentvm1 ~]#
Type the following command to go directly to the display manager login page without having to reboot:
[root@studentvm1 system]# systemctl isolate default.target

I am unsure why the term “isolate” was chosen for this subcommand by the developers of systemd. However, the effect is to switch from one run target to another, in this case from the multi-user target to the graphical target. The preceding command is equivalent to the old init 5 command in the days of SystemV start scripts and the init program.

Log in to the GUI desktop.

We will explore systemd in more detail in Chapter 13 of Volume 2.

GRUB and the systemd init system are key components in the boot and startup phases of most modern Linux distributions. These two components work together smoothly to first load the kernel and then to start up all of the system services required to produce a functional GNU/Linux system.

Although I do find both GRUB and systemd more complex than their predecessors, they are also just as easy to learn and manage. The man pages have a great deal of information about systemd, and freedesktop.org has a web site that describes the complete startup process14 and a complete set of systemd man pages15 online.

Graphical login screen

There are still two components that figure into the very end of the boot and startup process for the graphical.target: the display manager (dm) and the window manager (wm). These two programs, regardless of which ones you use on your Linux GUI desktop system, always work closely together to make your GUI login experience smooth and seamless before you even get to your desktop.

Display manager

The display manager16 is a program with the sole function of providing the GUI login screen for your Linux desktop. After you log in to a GUI desktop, the display manager turns control over to the window manager. When you log out of the desktop, the display manager is given control again to display the login screen and wait for another login.

There are several display managers; some are provided with their respective desktops. For example, the kdm display manager is provided with the KDE desktop. Many display managers are not directly associated with a specific desktop. Any of the display managers can be used for your login screen regardless of which desktop you are using. And not all desktops have their own display managers. Such is the flexibility of Linux and well-written, modular code.

The typical desktops and display managers are shown in Figure 16-6. The display manager for the first desktop that is installed, that is, GNOME, KDE, etc., becomes the default one. For Fedora, this is usually gdm which is the display manager for GNOME. If GNOME is not installed, then the display manager for the installed desktop is the default. If the desktop selected during installation does not have a default display manager, then gdm is installed and used. If you use KDE as your desktop, the new SDDM17 will be the default display manager.
Figure 16-6. A short list of display managers

Regardless of which display manager is configured as the default at installation time, later installation of additional desktops does not automatically change the display manager used. If you want to change the display manager, you must do it yourself from the command line. Any display manager can be used, regardless of which window manager and desktop are used.

Window manager

The function of a window manager18 is to manage the creation, movement, and destruction of windows on a GUI desktop including the GUI login screen. The window manager works with the Xwindow19 system or the newer Wayland20 to perform these tasks. The Xwindow system provides all of the graphical primitives and functions to generate the graphics for a Linux or Unix graphical user interface.

The window manager also controls the appearance of the windows it generates. This includes the functional decorative aspects of the windows, such as the look of buttons, sliders, window frames, pop-up menus, and more.

As with almost every other component of Linux, there are many different window managers from which to choose. The list in Figure 16-7 represents only a sample of the available window managers. Some of these window managers are stand-alone, that is, they are not associated with a desktop and can be used to provide a simple graphical user interface without the more complex, feature-rich, and more resource-intensive overhead of a full desktop environment. Stand-alone window managers should not be used with any of the desktop environments.
Figure 16-7. A short list of window managers

Most window managers are not directly associated with any specific desktop. In fact some window managers can be used without any type of desktop software, such as KDE or GNOME, to provide a very minimalist GUI experience for users. Many desktop environments support the use of more than one window manager.

How do I deal with all these choices?

In most modern distributions, the choices are made for you at installation time and are based on your selection of desktops and the preferences of the packagers of your distribution. The desktop and window managers and the display manager can be easily changed.

Now that systemd has become the standard startup system in many distributions, you can set the preferred display manager in /etc/systemd/system which is where the basic system startup configuration is located. There is a symbolic link (symlink) named display-manager.service that points to one of the display manager service units in /usr/lib/systemd/system. Each installed display manager has a service unit located there. To change the active display manager, remove the existing display-manager.service link, and replace it with the one you want to use.

Experiment 16-6

Perform this experiment as root. We will install additional display managers and stand-alone window managers, then switch between them.

Check and see which display managers are already installed. The RPMs in which the window managers are packaged have inconsistent naming, so it is difficult to locate them using a simple DNF search unless you already know their RPM package names which, after a bit of research, I do:
[root@studentvm1 ~]# dnf list compiz fluxbox fvwm icewm xorg-x11-twm xfwm4
Last metadata expiration check: 1:00:54 ago on Thu 29 Nov 2018 11:31:21 AM EST.
Installed Packages
xfwm4.x86_64                 4.12.5-1.fc28                      @updates
Available Packages
compiz.i686                  1:0.8.14-5.fc28                    fedora
compiz.x86_64                1:0.8.14-5.fc28                    fedora
fluxbox.x86_64               1.3.7-4.fc28                       fedora
fvwm.x86_64                  2.6.8-1.fc28                       updates
icewm.x86_64                 1.3.8-15.fc28                      fedora
xorg-x11-twm.x86_64          1:1.0.9-7.fc28                     fedora
[root@studentvm1 ~]#
Now let’s look for the display managers:
[root@studentvm1 ~]# dnf list gdm kdm lightdm lxdm sddm xfdm xorg-x11-xdm
Last metadata expiration check: 2:15:20 ago on Thu 29 Nov 2018 11:31:21 AM EST.
Installed Packages
lightdm.x86_64               1.28.0-1.fc28                      @updates
Available Packages
gdm.i686                     1:3.28.4-1.fc28                    updates
gdm.x86_64                   1:3.28.4-1.fc28                    updates
kdm.x86_64                   1:4.11.22-22.fc28                  fedora
lightdm.i686                 1.28.0-2.fc28                      updates
lightdm.x86_64               1.28.0-2.fc28                      updates
lxdm.x86_64                  0.5.3-10.D20161111gita548c73e.fc28 fedora
sddm.i686                    0.17.0-3.fc28                      updates
sddm.x86_64                  0.17.0-3.fc28                      updates
xorg-x11-xdm.x86_64          1:1.1.11-16.fc28                   fedora
[root@studentvm1 ~]#
Each dm is started as a systemd service, so another way to determine which ones are installed is to check the /usr/lib/systemd/system/ directory. Note that in the preceding output, lightdm shows up as both installed and available because an update is available for it:
[root@studentvm1 ~]# cd /usr/lib/systemd/system/ ; ll *dm.service
-rw-r--r-- 1 root root 1059 Sep  1 11:38 lightdm.service
[root@studentvm1 system]#
Like my VM, yours should have only a single dm, the lightdm. Let’s install lxdm and xorg-x11-xdm as additional display managers, with Compiz, FVWM, Fluxbox, and IceWM for window managers:
[root@studentvm1 ~]# dnf install -y lxdm xorg-x11-xdm compiz fvwm fluxbox icewm
Now we must restart the display manager service so that the newly installed window managers appear in the display manager selection tool. The simplest way is to log out of the desktop and restart the dm from a virtual console session:
[root@studentvm1 ~]# systemctl restart display-manager.service
Or we could do this by switching to the multiuser target and then back to the graphical target. Do this, too, just to see what switching between these targets looks like:
[root@studentvm1 ~]# systemctl isolate multi-user.target
[root@studentvm1 ~]# systemctl isolate graphical.target
But this second method is a lot more typing. Switch back to the lightdm login on vc1, and look in the upper right corner of the lightdm login screen. The leftmost icon, which on my VM looks like a sheet of paper with a wrench,21 allows us to choose the desktop or window manager we want to use before we log in. Click this icon and choose FVWM from the menu in Figure 16-8, then log in.
Figure 16-8. The lightdm display manager menu now shows the newly installed window managers

Explore this window manager. Open an Xterm instance, and locate the menu option that gives access to application programs. Figure 16-9 shows the Fvwm desktop (Fvwm is a bare window manager, not a desktop environment like KDE or GNOME) with an open Xterm instance and a menu tree that is opened with a left-click on the display. A different menu is opened with a right-click.

Fvwm is a very basic but usable window manager. Like most window managers, it provides menus to access various functions and a graphical display that supports simple windowing functionality. Fvwm also provides multiple windows in which to run programs, which gives it some task management capabilities.

Notice that the XDGMenu in Figure 16-9 also contains Xfce applications. The Start Here menu item leads to the Fvwm menus that include all of the standard Linux applications that are installed on the host.
Figure 16-9. The Fvwm window manager with an Xterm instance and some of the available menus

After spending a bit of time exploring the Fvwm interface, log out. Can’t find the way to do that? Neither could I as it is very nonintuitive. Left-click the desktop and open the FvwmConsole. Then type in the command Quit – yes, with the uppercase Q – and press Enter.

We could also open an Xterm session and use the following command which kills all instances of the Fvwm window manager belonging to the student user:
[student@studentvm1 ~]$ killall fvwm

Try each of the other window managers, exploring the basic functions of launching applications and a terminal session. When you have finished that, exit whichever window manager you are in, and log in again using the Xfce desktop environment.

Now let’s change the display manager to one of the new ones we have installed. Each dm has the same function: to provide a GUI for login and some configuration, such as which desktop environment or window manager to start as the user interface. Change into the /etc/systemd/system/ directory, and list the link for the display manager service:
[root@studentvm1 ~]# cd /etc/systemd/system/ ; ll display-manager.service
lrwxrwxrwx. 1 root root 39 Apr 25  2018 display-manager.service -> /usr/lib/systemd/system/lightdm.service
Locate all of the display manager services in /usr/lib/systemd/system/:
[root@studentvm1 system]# ll /usr/lib/systemd/system/*dm.service
-rw-r--r-- 1 root root 1059 Sep 26 11:04 /usr/lib/systemd/system/lightdm.service
-rw-r--r-- 1 root root  384 Feb 14  2018 /usr/lib/systemd/system/lxdm.service
-rw-r--r-- 1 root root  287 Feb 10  2018 /usr/lib/systemd/system/xdm.service
And make the change:
[root@studentvm1 system]# rm -f display-manager.service
[root@studentvm1 system]# ln -s /usr/lib/systemd/system/xdm.service display-manager.service
[root@studentvm1 system]# ll display-manager.service
lrwxrwxrwx 1 root root 35 Nov 30 09:03 display-manager.service -> /usr/lib/systemd/system/xdm.service
[root@studentvm1 system]#
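
As an aside, instead of manipulating the symlink by hand, systemctl can usually manage it for us because the Fedora dm packages declare Alias=display-manager.service in the [Install] section of their service units. The following sketch assumes your units carry that alias; verify with systemctl cat xdm.service before relying on it:
[root@studentvm1 system]# systemctl disable lightdm.service
[root@studentvm1 system]# systemctl enable xdm.service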

As far as I can tell at this point, rebooting the host is the only way to reliably activate the new dm. Go ahead and reboot your VM now to do that.
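The reboot can be launched from the same root terminal session:
[root@studentvm1 system]# reboot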

There is a tool, system-switch-displaymanager, which is supposed to make the necessary changes, and it does seem to work sometimes. But this tool does not restart the dm, and that step often fails when performed manually. Unfortunately, my own experiments have determined that restarting the display manager service does not reliably activate the new dm. The following steps are supposed to work; try them to see if they work for you as you switch back to the lightdm display manager:
[root@studentvm1 ~]# dnf -y install system-switch-displaymanager
[root@studentvm1 ~]# system-switch-displaymanager lightdm
[root@studentvm1 ~]# systemctl restart display-manager.service

If the last two steps in this sequence do not work, then reboot. Jason Baker, my technical reviewer, says, “This seemed to work for me, but then it failed to actually log in to lightdm, so I had to reboot.”

Different distributions and desktops have various means of changing the window manager, but, in general, changing the desktop environment also changes the window manager to the default one for that desktop. For current releases of Fedora Linux, the desktop environment can be changed on the display manager login screen. If stand-alone window managers are also installed, they appear in that list alongside the desktop environments.

There are many different choices for display and window managers available. When you install most modern distributions with any kind of desktop, the choices of which ones to install and activate are usually made by the installation program. For most users, there should never be any need to change these choices. For others who have different needs, or for those who are simply more adventurous, there are many options and combinations from which to choose. With a little research, you can make some interesting changes.

About the login

After a Linux host is turned on, it boots and goes through the startup process. When the startup process is completed, we are presented with a graphical or command-line login screen. Without a login prompt, it is impossible to log in to a Linux host.

Understanding how the login prompt is displayed, and how a new one is presented after a user logs out, is the final stage of understanding Linux startup.

CLI login screen

The CLI login screen is initiated by a program called a getty, which stands for GET TTY. The historical function of a getty was to wait for a connection from a remote dumb terminal to come in on a serial communications line. The getty program would spawn the login screen and wait for a login to occur. When the remote user logged in, the getty would terminate, and the default shell for the user account would launch, allowing the user to interact with the host on the command line. When the user logged out, the init program would spawn a new getty to listen for the next connection.

Today’s process is much the same with a few updates. We now use agetty, an advanced form of getty, in combination with the systemd service manager to handle the Linux virtual consoles as well as the increasingly rare incoming modem lines. The following steps show the sequence of events in a modern Linux computer:
  1.  systemd starts the systemd-getty-generator daemon.

  2.  The systemd-getty-generator spawns an agetty on each of the virtual consoles using the getty@.service template.

  3.  The agettys wait for a virtual console connection, which is the user switching to one of the VCs.

  4.  The agetty presents the text mode login screen on the display.

  5.  The user logs in.

  6.  The shell specified in /etc/passwd is started.

  7.  Shell configuration scripts run.

  8.  The user works in the shell session.

  9.  The user logs off.

  10. The systemd-getty-generator spawns an agetty on the logged-out virtual console.

  11. Go to step 3.

Starting with step 3, this is a circular process that repeats as long as the host is up and running. New login screens are displayed on a virtual console immediately after the user logs out of the old session.
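
You can watch this machinery at work on your own VM. These commands are a quick check rather than part of the chapter's procedure; the first lists the agetty processes attached to the active virtual consoles, and the second inspects one instance of the template unit. The instance name getty@tty2.service assumes you have activated virtual console 2, so adjust it to match your own console:
[root@studentvm1 ~]# ps -ef | grep agetty
[root@studentvm1 ~]# systemctl status getty@tty2.service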

GUI login screen

The GUI login screen displayed by the display manager is handled in much the same way that the systemd-getty-generator handles the text mode login:
  1.  The specified display manager (dm) is launched by systemd at the end of the startup sequence.

  2.  The display manager displays the graphical login screen, usually on virtual console 1.

  3.  The dm waits for a login.

  4.  The user logs in.

  5.  The specified window manager is started.

  6.  The specified desktop GUI, if any, is started.

  7.  The user performs work in the window manager/desktop.

  8.  The user logs out.

  9.  systemd respawns the display manager.

  10. Go to step 2.

The steps are almost the same, and the display manager functions as a graphical version of the agetty.
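
This, too, is easy to observe. Because display-manager.service is the generic name that resolves to whichever dm is currently active, a status check shows the actual dm process behind the graphical login; the output will reflect whichever dm you activated earlier in this chapter:
[root@studentvm1 ~]# systemctl status display-manager.service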

Chapter summary

We have explored the Linux boot and startup processes in some detail. This chapter explored reconfiguration of the GRUB bootloader to display the kernel boot and startup messages as well as to create recovery mode entries, ones that actually work, for the GRUB menu. Because there is a bug when attempting to boot to the rescue mode kernel, we discussed our responsibility as SysAdmins to report bugs through the appropriate channels.

We installed and explored some different window managers as an alternative to more complex desktop environments. The desktop environments do depend upon at least one of the window managers for their low-level graphical functions while providing useful, needed, and sometimes fun features. We also discovered how to change the default display manager to provide a different GUI login screen as well as how the GUI and command-line logins work.

This chapter has also been about learning tools like dd that we used to extract data from files and from specific locations on the hard drive. Understanding those tools and how they can be used to locate and trace data and files gives SysAdmins skills that can be applied to exploring other aspects of Linux.

Exercises

  1.  Describe the Linux boot process.

  2.  Describe the Linux startup process.

  3.  What does GRUB do?

  4.  Where is stage 1 of GRUB located on the hard drive?

  5.  What is the function of systemd during startup?

  6.  Where are the systemd startup target files and links located?

  7.  Configure the StudentVM1 host so that the default.target is reboot.target and reboot the system. After watching the VM reboot a couple of times, reconfigure the default.target to point to the graphical.target again and reboot.

  8.  What is the function of an agetty?

  9.  Describe the function of a display manager.

  10. What Linux component attaches to a virtual console and displays the text mode login screen?

  11. List and describe the Linux components involved and the sequence of events that take place from the time a user logs in to a virtual console until they log out.

  12. What happens when the display manager service is restarted from a root terminal session on the desktop using the command systemctl restart display-manager.service?