Objectives
The difference between Linux boot and startup
What happens during the hardware boot sequence
What happens during the Linux boot sequence
What happens during the Linux startup sequence
How to manage and modify the Linux boot and startup sequences
The function of the display and window managers
How the login process works for both virtual consoles and a GUI
What happens when a user logs off
This chapter explores the hardware boot sequence, the bootup sequence using the GRUB2 bootloader, and the startup sequence as performed by the systemd initialization system. It covers in detail the sequence of events required to change the state of the computer from off to fully up and running with a user logged in.
This chapter is about modern Linux distributions like Fedora and other Red Hat–based distributions that use systemd for startup, shutdown, and system management. systemd is the modern replacement for init and SystemV init scripts.
Overview
Hardware boot which initializes the system hardware
Linux boot which loads the Linux kernel and systemd
Linux startup in which systemd makes the host ready for productive work
It is important to separate the hardware boot from the Linux boot process from the Linux startup and to explicitly define the demarcation points between them. Understanding these differences and what part each plays in getting a Linux system to a state where it can be productive makes it possible to manage these processes and to better determine the portion in which a problem is occurring during what most people refer to as “boot.”
Hardware boot
The first step of the Linux boot process really has nothing whatever to do with Linux. This is the hardware portion of the boot process and is the same for any Intel-based operating system.
When power is first applied to the computer, or the VM we have created for this course, it runs the power-on self-test (POST)1 which is part of BIOS2 or the much newer Unified Extensible Firmware Interface3 (UEFI). When IBM designed the first PC back in 1981, BIOS was designed to initialize the hardware components. POST is the part of BIOS whose task is to ensure that the computer hardware functions correctly. If POST fails, the computer may not be usable, so the boot process does not continue.
Most modern motherboards provide the newer UEFI as a replacement for BIOS. Many motherboards also provide legacy BIOS support. Both BIOS and UEFI perform the same functions – hardware verification and initialization, and loading the boot loader. The VM we created for this course uses a BIOS interface which is perfectly fine for our purposes.
BIOS/UEFI POST checks basic operability of the hardware. Then it locates the boot sectors on all attached bootable devices including rotating or SSD hard drives, DVD or CD-ROM, or bootable USB memory sticks like the live USB device we used to install the StudentVM1 virtual machine. The first boot sector it finds that contains a valid master boot record (MBR)4 is loaded into RAM, and control is then transferred to the RAM copy of the boot sector.
The BIOS/UEFI user interface can be used to configure the system hardware for things like overclocking, specifying CPU cores as active or inactive, specifying the devices from which the system might boot, and the sequence in which those devices are to be searched for a bootable boot sector. I no longer create or boot from bootable CD or DVD devices. I only use bootable USB thumb drives to boot from external, removable devices.
Because I sometimes do boot from an external USB drive – or in the case of a VM, a bootable ISO image like that of the live USB device – I always configure my systems to boot first from the external USB device and then from the appropriate internal disk drive. This is not considered secure in most commercial environments, but then I do a lot of boots to external USB drives. If the whole computer is stolen or destroyed in a natural disaster, I can revert to the backups5 I keep in my safe deposit box.
In most environments you will want to be more secure and set the host to boot from the internal boot device only. Use a BIOS password to prevent unauthorized users from accessing BIOS to change the default boot sequence.
Hardware boot ends when the boot sector assumes control of the system.
Linux boot
The boot sector that is loaded by BIOS is really stage 1 of the GRUB6 boot loader. The Linux boot process itself is composed of multiple stages of GRUB. We consider each stage in this section.
GRUB
GRUB2 is the newest version of the GRUB bootloader and is by far the most commonly used today. We will not cover the older GRUB1 or LILO bootloaders in this course.
Because it is easier to write and say GRUB than GRUB2, I will use the term GRUB in this chapter, but I will be referring to GRUB2 unless specified otherwise. GRUB2 stands for “GRand Unified Bootloader, version 2,” and it is now the standard bootloader for most current Linux distributions. GRUB is the program which makes the computer just smart enough to find the operating system kernel and load it into memory, but it takes three stages of GRUB to do this. Wikipedia has an excellent article on GNU GRUB.7
GRUB has been designed to be compatible with the multiboot specification, which allows GRUB to boot many versions of Linux and other free operating systems. It can also chain load the boot record of proprietary operating systems. GRUB also allows the user to choose among several different kernels for your Linux distribution if more than one is present due to system updates. This affords the ability to boot to a previous kernel version if an updated one fails somehow or is incompatible with an important piece of software. GRUB2 is configured using the /boot/grub2/grub.cfg file.
GRUB1 is now considered to be legacy and has been replaced in most modern distributions with GRUB2, which is a complete rewrite of GRUB1. Red Hat-based distros upgraded to GRUB2 around Fedora 15 and CentOS/RHEL 7. GRUB2 provides the same boot functionality as GRUB1, but GRUB2 also provides a mainframe-like command-based pre-OS environment and allows more flexibility during the pre-boot phase.
The primary function of GRUB is to get the Linux kernel loaded into memory and running. The use of GRUB2 commands within the pre-OS environment is outside the scope of this chapter. Although GRUB does not officially use the stage terminology for its three stages, it is convenient to refer to them in that way, so I will.
GRUB stage 1
As mentioned in the BIOS/UEFI POST section, at the end of POST, BIOS/UEFI searches the attached disks for a boot record, which is located in the master boot record (MBR); it loads the first one it finds into memory and then starts execution of the boot record. The bootstrap code, that is GRUB stage 1, is very small because it must fit into the first 512-byte sector on the hard drive along with the partition table.8 The total amount of space allocated for the actual bootstrap code in a classic, generic MBR is 446 bytes. The 446-byte file for stage 1 is named boot.img and does not contain the partition table. The partition table is created when the device is partitioned and is overlaid onto the boot record starting at byte 447.
In UEFI systems, the partition table has been moved out of the MBR and into the space immediately following the MBR. This provides more space for defining partitions, so it allows a larger number of partitions to be created.
Because the boot record must be so small, it is also not very smart and does not understand filesystem structures such as EXT4. Therefore the sole purpose of stage 1 is to load GRUB stage 1.5, which must be located in the space between the boot record (along with any UEFI partition data) and the first partition on the drive. After loading GRUB stage 1.5 into RAM, stage 1 turns control over to stage 1.5.
Experiment 16-1
Use the dd command to view the boot record of the boot drive. For this experiment I assume it is assigned to the /dev/sda device. The bs= argument in the command specifies the block size, and the count= argument specifies the number of blocks to dump to STDIO. The if= argument (InFile) specifies the source of the data stream, in this case, the USB device:
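The command might look like the following sketch. Because dumping a real boot device requires root privileges and the correct device name, this version builds a fake 512-byte boot sector first so it can be run safely anywhere; on the VM, substitute if=/dev/sda (or your actual boot device) as described above.

```shell
# Build a fake 512-byte "boot sector" so the dd | od pipeline can be run
# safely anywhere; on real hardware, use if=/dev/sda instead.
fake_mbr=$(mktemp)
dd if=/dev/zero of="$fake_mbr" bs=512 count=1 status=none
# A real MBR ends with the 0x55 0xAA boot signature at bytes 510-511;
# \125\252 are the octal escapes for those two bytes:
printf '\125\252' | dd of="$fake_mbr" bs=1 seek=510 conv=notrunc status=none
# Dump one 512-byte block and display it with octal addresses and hex bytes:
dd if="$fake_mbr" bs=512 count=1 status=none | od -A o -t x1
```

Note how the long run of identical all-zero lines collapses into a single line containing only a star (*) in the od output.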
Note the star (*) (splat/asterisk) between addresses 0000520 and 0000660. This indicates that all of the data in that range is the same as the last line before it, 0000520, which is all null characters. This saves space in the output stream. The addresses are in octal, which is base 8.
A generic boot record that does not contain a partition table is located in the /boot/grub2/i386-pc directory. Let’s look at the content of that file. If we used dd, it would not be necessary to specify the block size and the count because we are looking at a file that already has a limited length. However, we can skip dd entirely and use od directly, specifying the file name as an argument.
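For example (a sketch; the boot.img path is as given above, but since that file may not exist on every system, a small stand-in file with similar contents is built here so the commands can be run anywhere):

```shell
# On the VM:  od -c /boot/grub2/i386-pc/boot.img
# Simulate a boot image containing binary bytes plus the kind of text
# strings that boot.img actually carries ("GRUB", "Hard Disk", and so on).
img=$(mktemp)
printf 'GRUB \000Geom\000Hard Disk\000Read\000 Error' > "$img"
od -c "$img"          # byte-by-byte view; text is visible inline
strings -n 4 "$img"   # pulls out only printable strings of 4+ characters
```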
This tool makes it easier to locate actual text strings than sorting through many lines of occasional random ASCII characters to find something meaningful. But note that, like the first line of the preceding output, not all text strings have meaning to humans.
The point here is that the GRUB boot record is installed in the first sector of the hard drive or other bootable media, using the boot.img file as the source. The partition table is then superimposed on the boot record in its specified location.
GRUB stage 1.5
As mentioned earlier, stage 1.5 of GRUB must be located in the space between the boot record (plus any UEFI partition data) and the first partition on the disk drive. This space was left unused historically for technical and compatibility reasons and is sometimes called the “boot track” or the “MBR gap.” The first partition on the hard drive begins at sector 63, and with the MBR in sector 0, that leaves 62 512-byte sectors – 31,744 bytes – in which to store stage 1.5 of GRUB, which is distributed as the core.img file. The core.img file is 28,535 bytes as of this writing, so there is plenty of space available between the MBR and the first disk partition in which to store it.
Experiment 16-2
The first sector of each will do for verification, but you should feel free to explore more of the code if you like. There are tools that we could use to compare the file with the data in GRUB stage 1.5 on the hard drive, but it is obvious that these two sectors of data are identical.
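The comparison technique can be sketched as follows. On the VM, the two sources would be /boot/grub2/i386-pc/core.img and the sectors immediately following the MBR on /dev/sda; here both are simulated with temporary files so the commands can be run safely anywhere.

```shell
# Simulate core.img and its installed on-disk copy, then compare first sectors.
core=$(mktemp)   # stand-in for /boot/grub2/i386-pc/core.img
disk=$(mktemp)   # stand-in for the MBR gap on the boot drive
dd if=/dev/urandom of="$core" bs=512 count=4 status=none
cp "$core" "$disk"
# Extract the first 512-byte sector of each and compare them byte for byte:
a=$(mktemp); b=$(mktemp)
dd if="$core" bs=512 count=1 status=none > "$a"
dd if="$disk" bs=512 count=1 status=none > "$b"
cmp -s "$a" "$b" && echo "first sectors are identical"
```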
At this point we know the files that contain stages 1 and 1.5 of the GRUB bootloader and where they are located on the hard drive in order to perform their function as the Linux bootloader.
Because stage 1.5 can accommodate a larger amount of code than stage 1, it can contain a few common filesystem drivers, such as those for the standard Linux EXT and XFS filesystems as well as non-Linux ones like FAT and NTFS. The GRUB2 core.img is much more complex and capable than the older GRUB1 stage 1.5. This means that stage 2 of GRUB2 can be located on a standard EXT filesystem, but it cannot be located on a logical volume because it must be read from a specific location on the bootable volume before the filesystem drivers have been loaded.
Note that the /boot directory must be located on a filesystem that is supported by GRUB such as EXT4. Not all filesystems are. The function of stage 1.5 is to begin execution with the filesystem drivers necessary to locate the stage 2 files in the /boot filesystem and load the needed drivers.
GRUB stage 2
All of the files for GRUB stage 2 are located in the /boot/grub2 directory and its subdirectories. GRUB2 does not have an image file like stages 1 and 1.5. Instead, it consists of those files and runtime kernel modules that are loaded as needed from the /boot/grub2 directory and its subdirectories. Some Linux distributions may store these files in the /boot/grub directory.
The function of GRUB stage 2 is to locate and load a Linux kernel into RAM and turn control of the computer over to the kernel. The kernel and its associated files are located in the /boot directory. The kernel files are identifiable as they are all named starting with vmlinuz. You can list the contents of the /boot directory to see the currently installed kernels on your system.
Experiment 16-3
You can see that there are four kernels and their supporting files in this list. The System.map files are symbol tables that map symbols, such as variables and functions, to their physical addresses. The initramfs files are used early in the Linux boot process, before the filesystem drivers have been loaded and the filesystems mounted.
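The naming convention can be sketched like this (version numbers below are invented for illustration; run ls -l /boot on your VM to see the real ones):

```shell
# Each installed kernel contributes a vmlinuz, System.map, and initramfs file
# to /boot; simulated here in a temporary directory with made-up versions.
boot=$(mktemp -d)
for v in 5.0.9-301.fc30 5.0.7-300.fc30; do
    touch "$boot/vmlinuz-$v.x86_64" \
          "$boot/System.map-$v.x86_64" \
          "$boot/initramfs-$v.x86_64.img"
done
ls "$boot" | grep '^vmlinuz'   # the kernel files all start with vmlinuz
```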
The default kernel is always the most recent one that has been installed during updates, and it will boot automatically after a short timeout of five seconds. If the up and down arrows are pressed, the countdown stops, and the highlight bar moves to another kernel. Press Enter to boot the selected kernel.
If almost any key other than the up and down arrow keys or the “e” or “c” keys are pressed, the countdown stops and waits for more input. Now you can take your time to use the arrow keys to select a kernel to boot and then press the Enter key to boot from it. Stage 2 of GRUB loads the selected kernel into memory and turns control of the computer over to the kernel.
The rescue boot option is intended as a last resort when attempting to resolve severe boot problems – ones which prevent the Linux system from completing the boot process. When some types of errors occur during boot, GRUB will automatically fall back to boot from the rescue image.
The GRUB menu entries for installed kernels have been useful to me. Before I became aware of VirtualBox, I used commercial virtualization software that sometimes experienced problems when the Linux kernel was updated. Although the company tried to keep up with kernel variations, they eventually stopped updating their software to run with every kernel version. Whenever they did not support a kernel version to which I had updated, I used the GRUB menu to select an older kernel which I knew would work. I discovered that maintaining only three older kernels was not always enough, so I configured the DNF package manager to save up to ten kernels. DNF package manager configuration is covered in Volume 1, Chapter 12.
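That DNF change is a single setting in /etc/dnf/dnf.conf; a sketch of the edited line:

```shell
# /etc/dnf/dnf.conf (excerpt) -- installonly_limit controls how many kernels
# DNF retains during updates; the default is 3, and 0 means unlimited.
installonly_limit=10
```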
Configuring GRUB
GRUB is configured with /boot/grub2/grub.cfg, but we do not change that file because it can get overwritten when the kernel is updated to a new version. Instead, we make modifications to the /etc/default/grub file.
Experiment 16-4
Chapter 6 of the GRUB documentation referenced in footnote 6 contains a complete listing of all the possible entries in the /etc/default/grub file, but there are three that we should look at here.
I always change GRUB_TIMEOUT, the number of seconds for the GRUB menu countdown, from five to ten, which gives a bit more time to respond to the GRUB menu before the countdown hits zero.
I also change GRUB_DISABLE_RECOVERY from “true” to “false,” which is a bit of reverse programmer logic. I have found that the rescue boot option does not always work. To circumvent this problem, I change this statement to allow the grub2-mkconfig command to generate a recovery option for each installed kernel; I have found that when the rescue option fails, these options do work. This also provides recovery kernels to fall back on in case a particular tool or software package needs to run on a specific kernel version.
Note Changing GRUB_DISABLE_RECOVERY in the grub default configuration no longer works starting in Fedora 30. The other changes, GRUB_TIMEOUT and removing “rhgb quiet” from the GRUB_CMDLINE_LINUX variable, still work.
The GRUB_CMDLINE_LINUX line can be changed, too. This line lists the command-line parameters that are passed to the kernel at boot time. I usually delete the last two parameters on this line. The rhgb parameter stands for Red Hat Graphical Boot, and it causes the little graphical animation of the Fedora icon to display during the kernel initialization instead of showing boot time messages. The quiet parameter prevents the display of the startup messages that document the progress of the startup and any errors that might occur. Delete both of these entries because SysAdmins need to be able to see these messages. If something goes wrong during boot, the messages displayed on the screen can point us to the cause of the problem.
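Taken together, the edits described above leave /etc/default/grub looking something like this sketch (only the changed lines are shown; any other parameters on the GRUB_CMDLINE_LINUX line, which vary by system, would be kept):

```shell
# /etc/default/grub (excerpt, after editing)
GRUB_TIMEOUT=10                # was 5; more time to react to the GRUB menu
GRUB_DISABLE_RECOVERY="false"  # was "true"; generate a recovery entry per kernel
GRUB_CMDLINE_LINUX=""          # "rhgb quiet" removed so boot messages are shown
```

Remember that the edits take effect only after regenerating /boot/grub2/grub.cfg with grub2-mkconfig.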
Recheck the content of /boot/grub2/grub.cfg, which should reflect the changes we made. You can grep for the specific lines we changed to verify that the changes occurred. We could also use an alternative form of the grub2-mkconfig command that specifies the output file: grub2-mkconfig -o /boot/grub2/grub.cfg. Either form works, and the results are the same.
Use the down arrow key to highlight the recovery option for the default kernel – the second option – and press the Enter key to complete the boot and startup process. This will take you into recovery mode using that kernel. You will also notice many messages displayed on the screen as the system boots and goes through startup. Some of these messages can be seen in Figure 16-3 along with messages pertaining to the rescue shell.
Type the root password to log in. There are also instructions on the screen in case you want to reboot or continue into the default runlevel target.
Notice also at the bottom of the screen in Figure 16-3 that the little trail of messages we will embed in the bash startup configuration files in Chapter 17 shows here that the /etc/bashrc and /etc/profile.d/myBashConfig.sh files – along with all of the other bash configuration files in /etc/profile.d – were run at login. I have skipped ahead a bit with this, but I will show you how to test it yourself in Chapter 17. This is good information to have because you will know what to expect in the way of shell configuration while working in recovery mode.
Before completing this experiment, reboot your VM to one of the older regular kernels, and log in to the desktop. Test a few programs, and then open a terminal session to test some command-line utilities. Everything should work without a problem because the kernel version is not bound to specific versions of the rest of the Linux operating system. Running an alternate kernel is easy and commonplace.
To end this experiment, reboot the system and allow the default kernel to boot. No intervention will be required. You will see all of the kernel boot and startup messages during this normal boot.
There are three different terms that are typically applied to recovery mode: recovery, rescue, and maintenance. These are all functionally the same. Maintenance mode is typically used when the Linux host fails to boot to its default target due to some error that occurs during the boot and startup. Being able to see the boot and startup messages if an error occurs can also provide clues as to where the problem might exist.
I have found that the rescue kernel, the option at the bottom of the GRUB menu in Figures 16-1, 16-2, and 16-3, almost never works. I have tried it on a variety of physical hardware and virtual machines, and it always fails. So I need to use the recovery kernels, and that is why I configure GRUB to create those recovery menu options.
In Figure 16-2, after configuring GRUB and running the grub2-mkconfig -o /boot/grub2/grub.cfg command, there are two rescue mode menu options. In my testing I have discovered that the top rescue mode menu option fails but that the bottom rescue mode menu option, the one we just created, does work. But it really does not seem to matter because, as I have said, both rescue and recovery modes provide exactly the same function. This problem is a bug, probably in GRUB, so I reported it to Red Hat using Bugzilla.9
Part of our responsibility as SysAdmins, and part of giving back to the open source community, is to report bugs when we encounter them. Anyone can create an account and log in to report bugs. Updates will be sent to you by e-mail whenever a change is made to the bug report.
The Linux kernel
All Linux kernels are in a self-extracting, compressed format to save space. The kernels are located in the /boot directory, along with an initial RAM disk image and symbol maps. After the selected kernel is loaded into memory by GRUB and begins executing, it must first extract itself from the compressed file before it can perform any useful work. Once the kernel has extracted itself, it loads systemd and turns control over to it.
This is the end of the boot process. At this point, the Linux kernel and systemd are running but unable to perform any productive tasks for the end user because nothing else is running, no shell to provide a command line, no background processes to manage the network or other communication links, and nothing that enables the computer to perform any productive function.
Linux startup
The startup process follows the boot process and brings the Linux computer up to an operational state in which it is usable for productive work. The startup process begins when the kernel transfers control of the host to systemd.
systemd
systemd10,11 is the mother of all processes, and it is responsible for bringing the Linux host up to a state in which productive work can be done. Some of its functions, which are far more extensive than the old SystemV12 init program, are to manage many aspects of a running Linux host, including mounting filesystems and starting and managing system services required to have a productive Linux host. Any of systemd’s tasks that are not related to the startup sequence are outside the scope of this chapter, but we will explore them in Volume 2, Chapter 13.
First, systemd mounts the filesystems as defined by /etc/fstab, including any swap files or partitions. At this point, it can access the configuration files located in /etc, including its own. It uses its configuration link, /etc/systemd/system/default.target, to determine the state, or target, into which it should boot the host. The default.target file is a symbolic link to the true target file. For a desktop workstation, this is typically going to be the graphical.target, which is equivalent to runlevel 5 in SystemV. For a server, the default is more likely to be the multi-user.target, which is like runlevel 3 in SystemV. The emergency.target is similar to single-user mode. Targets and services are systemd units.
Each target has a set of dependencies described in its configuration file. systemd starts the required dependencies. These dependencies are the services required to run the Linux host at a specific level of functionality. When all of the dependencies listed in the target configuration files are loaded and running, the system is running at that target level.
systemd also looks at the legacy SystemV init directories to see if any startup files exist there. If so, systemd uses those as configuration files to start the services described by the files. The deprecated network service is a good example of one that still uses SystemV startup files in Fedora.
Figure 16-5 is copied directly from the bootup man page. It shows a map of the general sequence of events during systemd startup and the basic ordering requirements to ensure a successful startup.
The sysinit.target and basic.target targets can be considered as checkpoints in the startup process. Although one of systemd’s design goals is to start system services in parallel, there are still certain services and functional targets that must be started before other services and targets can be started. These checkpoints cannot be passed until all of the services and targets required by that checkpoint are fulfilled.
The sysinit.target is reached when all of the units on which it depends are completed. All of those units, mounting filesystems, setting up swap files, starting udev, setting the random generator seed, initiating low-level services, and setting up cryptographic services if one or more filesystems are encrypted, must be completed, but within the sysinit.target, those tasks can be performed in parallel.
The sysinit.target starts up all of the low-level services and units required for the system to be marginally functional, and that are required to enable moving on to the basic.target.
After the sysinit.target is fulfilled, systemd next starts the basic.target, starting all of the units required to fulfill it. The basic target provides some additional functionality by starting units that are required for all of the next targets. These include setting up things like paths to various executable directories, communication sockets, and timers.
The bootup man page also describes and provides maps of the boot into the initial RAM disk and the systemd shutdown process.
Experiment 16-5
So far we have only booted to the graphical.target, so let’s change the default target to multi-user.target to boot into a console interface rather than a GUI interface.
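On the VM this is done with systemctl set-default multi-user.target, followed by a reboot. Under the covers, set-default simply re-points the /etc/systemd/system/default.target symlink, which we can demonstrate safely in a temporary directory (the paths are those used on Fedora; the link targets need not exist for the demonstration):

```shell
# Simulate what "systemctl set-default multi-user.target" does to the
# default.target symlink.
etc=$(mktemp -d)
ln -s /usr/lib/systemd/system/graphical.target "$etc/default.target"    # old default
ln -sf /usr/lib/systemd/system/multi-user.target "$etc/default.target"  # new default
basename "$(readlink "$etc/default.target")"
```

The systemctl get-default command reports the current setting by reading this same link.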
I have shortened this listing to highlight a few important things that will help us understand how systemd manages the boot process. You should be able to see the entire list of directories and links on your VM.
The default.target has different requirements in the [Unit] section. It does not require the graphical display manager.
I am unsure why the term “isolate” was chosen for this subcommand by the developers of systemd. However, the effect is to switch targets from one run target to another, in this case from the emergency target to the graphical target. The preceding command is equivalent to the old init 5 command in the days of SystemV start scripts and the init program.
Log in to the GUI desktop.
We will explore systemd in more detail in Chapter 13 of Volume 2.
GRUB and the systemd init system are key components in the boot and startup phases of most modern Linux distributions. These two components work together smoothly to first load the kernel and then to start up all of the system services required to produce a functional GNU/Linux system.
Although I do find both GRUB and systemd more complex than their predecessors, they are also just as easy to learn and manage. The man pages have a great deal of information about systemd, and freedesktop.org has a web site that describes the complete startup process14 and a complete set of systemd man pages15 online.
Graphical login screen
There are still two components that figure into the very end of the boot and startup process for the graphical.target, the display manager (dm) and the window manager (wm). These two programs, regardless of which ones you use on your Linux GUI desktop system, always work closely together to make your GUI login experience smooth and seamless before you even get to your desktop.
Display manager
The display manager16 is a program with the sole function of providing the GUI login screen for your Linux desktop. After you log in to a GUI desktop, the display manager turns control over to the window manager. When you log out of the desktop, the display manager is given control again to display the login screen and wait for another login.
There are several display managers; some are provided with their respective desktops. For example, the kdm display manager is provided with the KDE desktop. Many display managers are not directly associated with a specific desktop. Any of the display managers can be used for your login screen regardless of which desktop you are using. And not all desktops have their own display managers. Such is the flexibility of Linux and well-written, modular code.
Regardless of which display manager is configured as the default at installation time, later installation of additional desktops does not automatically change the display manager used. If you want to change the display manager, you must do it yourself from the command line. Any display manager can be used, regardless of which window manager and desktop are used.
Window manager
The function of a window manager18 is to manage the creation, movement, and destruction of windows on a GUI desktop including the GUI login screen. The window manager works with the Xwindow19 system or the newer Wayland20 to perform these tasks. The Xwindow system provides all of the graphical primitives and functions to generate the graphics for a Linux or Unix graphical user interface.
The window manager also controls the appearance of the windows it generates. This includes the functional decorative aspects of the windows, such as the look of buttons, sliders, window frames, pop-up menus, and more.
Most window managers are not directly associated with any specific desktop. In fact some window managers can be used without any type of desktop software, such as KDE or GNOME, to provide a very minimalist GUI experience for users. Many desktop environments support the use of more than one window manager.
How do I deal with all these choices?
In most modern distributions, the choices are made for you at installation time and are based on your selection of desktops and the preferences of the packagers of your distribution. The desktop and window managers and the display manager can be easily changed.
Now that systemd has become the standard startup system in many distributions, you can set the preferred display manager in /etc/systemd/system which is where the basic system startup configuration is located. There is a symbolic link (symlink) named display-manager.service that points to one of the display manager service units in /usr/lib/systemd/system. Each installed display manager has a service unit located there. To change the active display manager, remove the existing display-manager.service link, and replace it with the one you want to use.
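The procedure just described might look like this sketch, with gdm as the current display manager and lightdm as the replacement (both are real display managers; the live paths are shown in comments, while the commands below act on a temporary directory so they can be run safely anywhere):

```shell
# On the live system, as root:
#   rm /etc/systemd/system/display-manager.service
#   ln -s /usr/lib/systemd/system/lightdm.service \
#         /etc/systemd/system/display-manager.service
# The same symlink swap, simulated:
etc=$(mktemp -d)
ln -s /usr/lib/systemd/system/gdm.service "$etc/display-manager.service"
rm "$etc/display-manager.service"
ln -s /usr/lib/systemd/system/lightdm.service "$etc/display-manager.service"
readlink "$etc/display-manager.service"
```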
Experiment 16-6
Perform this experiment as root. We will install additional display managers and stand-alone window managers and then switch between them.
Explore this window manager. Open an Xterm instance, and locate the menu option that gives access to application programs. Figure 16-9 shows the Fvwm desktop (this is not a desktop environment like KDE or GNOME) with an open Xterm instance and a menu tree that is opened with a left click on the display. A different menu is opened with a right-click.
Fvwm is a very basic but usable window manager. Like most window managers, it provides menus to access various functions and a graphical display that supports simple windowing functionality. Fvwm also provides multiple windows in which to run programs for some task management capabilities.
After spending a bit of time exploring the Fvwm interface, log out. Can’t find the way to do that? Neither could I as it is very nonintuitive. Left-click the desktop and open the FvwmConsole. Then type in the command Quit – yes, with the uppercase Q – and press Enter.
Try each of the other window managers, exploring the basic functions of launching applications and a terminal session. When you have finished that, exit whichever window manager you are in, and log in again using the Xfce desktop environment.
As far as I can tell at this point, rebooting the host is the only way to reliably activate the new dm. Go ahead and reboot your VM now to do that.
If the second two steps in this sequence do not work, then reboot. Jason Baker, my technical reviewer, says, “This seemed to work for me, but then it failed to actually log in to lightdm, so I had to reboot.”
Different distributions and desktops have various means of changing the window manager, but, in general, changing the desktop environment also changes the window manager to the default one for that desktop. For current releases of Fedora Linux, the desktop environment can be changed on the display manager login screen. If stand-alone display managers are also installed, they also appear in the list with the desktop environments.
There are many different choices for display and window managers available. When you install most modern distributions with any kind of desktop, the choices of which ones to install and activate are usually made by the installation program. For most users, there should never be any need to change these choices. For others who have different needs, or for those who are simply more adventurous, there are many options and combinations from which to choose. With a little research, you can make some interesting changes.
About the login
After a Linux host is turned on, it boots and goes through the startup process. When the startup process is completed, we are presented with a graphical or command-line login screen. Without a login prompt, it is impossible to log in to a Linux host.
Understanding how the login prompt is displayed, and how a new one is displayed after a user logs out, is the final stage of understanding the Linux startup.
CLI login screen
The CLI login screen is initiated by a program called a getty, which stands for “get TTY.” The historical function of a getty was to wait for a connection from a remote dumb terminal to come in on a serial communications line. The getty program would display the login screen and wait for a login to occur. When the remote user logged in, the getty would terminate, and the default shell for the user account would launch, allowing the user to interact with the host on the command line. When the user logged out, the init program would spawn a new getty to listen for the next connection.
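On a systemd host, one agetty runs per virtual console, and each is an instance of a single unit template. The sketch below illustrates only the naming rule: the instance name (the virtual console) is inserted after the `@` in the template name. This is plain string manipulation for illustration; systemd performs the instantiation internally.

```shell
# Template units such as getty@.service are instantiated by placing
# the instance name (here, a virtual console) after the @.
template="getty@.service"

for vc in tty1 tty2 tty3; do
    unit="${template%.service}${vc}.service"
    echo "$unit"
done
# Prints getty@tty1.service, getty@tty2.service, getty@tty3.service
```

On a running system, `systemctl status getty@tty2.service` shows the agetty instance attached to virtual console 2.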
1. systemd starts the systemd-getty-generator.
2. The systemd-getty-generator spawns an agetty on each of the virtual consoles using the getty@.service template.
3. The agettys wait for a virtual console connection, which is the user switching to one of the VCs.
4. The agetty presents the text-mode login screen on the display.
5. The user logs in.
6. The shell specified in /etc/passwd is started.
7. Shell configuration scripts run.
8. The user works in the shell session.
9. The user logs off.
10. The systemd-getty-generator spawns an agetty on the logged-out virtual console.
11. Go to step 3.
Starting with step 3, this is a circular process that repeats as long as the host is up and running. New login screens are displayed on a virtual console immediately after the user logs out of the old session.
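Step 6 of the sequence above, finding the user's login shell, can be sketched with a few lines of shell. The passwd entry here is a made-up sample; on a real host, the login program reads the matching entry from /etc/passwd.

```shell
# A minimal sketch of step 6: the login shell is the seventh
# colon-separated field of the user's /etc/passwd entry.
# This passwd line is a hypothetical sample, not a real account.
passwd_line="student:x:1000:1000:Student User:/home/student:/bin/bash"

login_shell=$(printf '%s' "$passwd_line" | cut -d: -f7)
echo "$login_shell"    # /bin/bash
```

On your own host, `grep $USER /etc/passwd` shows the real entry, and `echo $SHELL` shows the shell your current session received.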
GUI login screen
1. The specified display manager (dm) is launched by systemd at the end of the startup sequence.
2. The display manager displays the graphical login screen, usually on virtual console 1.
3. The dm waits for a login.
4. The user logs in.
5. The specified window manager is started.
6. The specified desktop GUI, if any, is started.
7. The user performs work in the window manager/desktop.
8. The user logs out.
9. systemd respawns the display manager.
10. Go to step 2.
The steps are almost the same as those of the virtual console login, and the display manager functions as a graphical version of the agetty.
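How does systemd know which display manager to launch in step 1? On Fedora and other systemd distributions, display-manager.service is an alias symlink in /etc/systemd/system that points at the chosen dm's unit file. The sketch below demonstrates the symlink mechanism in a temporary directory so it runs anywhere; on a real host, `systemctl enable --force` manages the link, and the paths would be under /etc/systemd/system and /usr/lib/systemd/system.

```shell
# A sandbox stands in for /etc/systemd/system; the unit files here
# are empty placeholders, not real units.
sysdir=$(mktemp -d)
touch "$sysdir/gdm.service" "$sysdir/lightdm.service"

# "Enabling" lightdm as the display manager replaces the alias symlink,
# much as systemctl enable --force lightdm.service would.
ln -sf "$sysdir/lightdm.service" "$sysdir/display-manager.service"

# systemd would now launch lightdm at the end of the startup sequence.
basename "$(readlink "$sysdir/display-manager.service")"   # lightdm.service
```

On a running host, `ls -l /etc/systemd/system/display-manager.service` shows which dm is currently active.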
Chapter summary
We have explored the Linux boot and startup processes in some detail. This chapter explored reconfiguration of the GRUB bootloader to display the kernel boot and startup messages as well as to create recovery mode entries, ones that actually work, for the GRUB menu. Because there is a bug when attempting to boot to the rescue mode kernel, we discussed our responsibility as SysAdmins to report bugs through the appropriate channels.
We installed and explored some different window managers as an alternative to more complex desktop environments. The desktop environments do depend upon at least one of the window managers for their low-level graphical functions while providing useful, needed, and sometimes fun features. We also discovered how to change the default display manager to provide a different GUI login screen as well as how the GUI and command-line logins work.
This chapter has also been about learning tools like dd, which we used to extract data from files and from specific locations on the hard drive. Understanding those tools and how they can be used to locate and trace data and files gives SysAdmins skills that can be applied to exploring other aspects of Linux.
Exercises
1. Describe the Linux boot process.
2. Describe the Linux startup process.
3. What does GRUB do?
4. Where is stage 1 of GRUB located on the hard drive?
5. What is the function of systemd during startup?
6. Where are the systemd startup target files and links located?
7. Configure the StudentVM1 host so that the default.target is reboot.target and reboot the system. After watching the VM reboot a couple of times, reconfigure the default.target to point to the graphical.target again and reboot.
8. What is the function of an agetty?
9. Describe the function of a display manager.
10. What Linux component attaches to a virtual console and displays the text-mode login screen?
11. List and describe the Linux components involved and the sequence of events that take place when a user logs in to a virtual console until they log out.
12. What happens when the display manager service is restarted from a root terminal session on the desktop using the command systemctl restart display-manager.service?