This chapter focuses on the essential steps for preparing the environment to start automating with Salt. We will first present some of the most important Salt-specific keywords and their meaning. Following that, we’ll take a look at the main configuration files used to control the behavior of Salt’s processes. Finally, we’ll review starting the processes, which completes the environment setup.
Salt comes with a particular nomenclature that requires a careful review of the documentation to fully understand. In general the documentation is very good and complete, very often providing usage examples; however, much of it is written for an audience that already knows the Salt basics and only needs to know how a particular module or interface is configured or called.
Pillar is free-form data that can be used to organize configuration values or manage sensitive data. It is an entity of data that can be either stored locally using the filesystem, or using external systems such as databases, Vault, Amazon S3, Git, and many other resources (see “Using External Pillar”). Simple examples of pillar data include a list of NTP peers, interface details, and BGP configuration.
When defined as SLS files they follow the methodologies described under “Extensible and Scalable Configuration Files: SLS”, and the data type is therefore a Jinja/YAML combination, by default, but different formats can be used if desired. For very complex use cases, we even have the option of using a pure Python renderer.
In order to avoid rewriting the same content repeatedly when working with SLS files, three keywords can be used: include, exclude, and extend. Here we will cover only the include statement. The other two work in a similar way, and they can be further explored by consulting the SaltStack documentation.
The content of a different SLS file can be included by specifying its name, without the .sls extension. For example, if we organize ntp_config.sls, router1.sls, and router2.sls all in the same directory, then we can include the contents of ntp_config.sls into the router1.sls and router2.sls pillars with the syntax shown in Example 2-1.
    include:
      - ntp_config
Note that include accepts a list, so we are able to include the content from multiple SLS files.
The inclusion can also be relative, using the common . (dot) and .. (dot-dot) syntax seen when navigating a directory hierarchy.
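As a sketch of the relative form, assuming the same ntp_config.sls sits next to the including file, and a hypothetical common.sls lives one directory up, the include statement could look like this:

```yaml
include:
  # ntp_config.sls in the same directory as this SLS file
  - .ntp_config
  # a hypothetical common.sls one directory up
  - ..common
```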
To set up a NAPALM proxy, the pillar should contain the following information:
- driver: The name of the NAPALM driver.
- host: FQDN or IP address to use when connecting to the device (alternatively, this value can be specified using the fqdn, ip, or hostname fields).
- username: Username to be used when connecting to the device.
- password: Password required to establish the connection (if the driver permits, the authentication can be established using an SSH key and this field can be left blank).
Additionally, we can also specify other parameters, such as port, enable_password, and so on, using the optional_args field. Refer to the NAPALM documentation for the complete list of optional arguments.
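As a sketch, a pillar passing a nonstandard SSH port through optional_args might look like the following; the port value 2222 is purely illustrative:

```yaml
proxy:
  proxytype: napalm
  driver: junos
  fqdn: r1.bbone.as1234.net
  username: napalm
  password: Napalm123
  optional_args:
    # passed through to the underlying NAPALM driver
    port: 2222
```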
In Examples 2-2 and 2-3, we provide two sample pillar files to manage different operating systems, Junos and EOS. The same format can be used to manage Cisco IOS devices. Note the usage of the proxytype field, which must be configured as napalm to tell Salt that the NAPALM proxy modules are going to be used.
    proxy:
      proxytype: napalm
      driver: junos
      fqdn: r1.bbone.as1234.net
      username: napalm
      password: Napalm123
    proxy:
      proxytype: napalm
      driver: eos
      fqdn: sw1.bbone.as1234.net
      username: napalm
      password: Napalm123
If you are authenticating using SSH keys, and the driver supports key-based authentication, the password field is not mandatory or can be empty (i.e., password: '').
    proxy:
      proxytype: napalm
      driver: ios
      fqdn: r2.bbone.as1234.net
      username: napalm
      optional_args:
        secret: s3kr3t
Grains represent static data collected from the device. The user does not need to do anything other than be aware that this information already exists and is available. Grains are typically handy for targeting minions and for complex, cross-vendor templating, though they are not limited to these uses. They are directly available when working with the CLI and also inside templates.
Grains must be understood as purely static data: information very unlikely to change, or at least data that does not change often.
When a device is managed through NAPALM the following grains are collected:
Grain name | Grain description
---|---
vendor | Name of the vendor
model | Chassis physical model
serial | Chassis serial number
os | The operating system name
version | The operating system version
uptime | The uptime in seconds
host | Host (FQDN) of the device
interfaces | List of interfaces
username | The username used for connection
You also have the option of configuring grains statically inside the proxy minion configuration file. This is a good idea when you need to configure device-specific data that can be used to uniquely identify a device or a class of devices (e.g., role can be set as spine, leaf, etc.).
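As a sketch, static grains are declared under the grains key of the proxy minion configuration file; the role and site values below are hypothetical examples, not values defined by Salt:

```yaml
grains:
  role: spine
  site: iad1
```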
Very often, we will need additional grains to be collected dynamically from the device to determine and cache particular characteristics. Writing grain modules is beyond the scope of this book, but is discussed in the documentation.
The Salt system is very flexible and easy to configure. For network automation our components are the salt-master, configured via the master configuration file (typically /etc/salt/master or /srv/master), and the salt-proxy, configured via the proxy configuration file (in general /etc/salt/proxy, /srv/proxy, or C:\salt\conf\proxy, depending on the platform). Their location depends on the environment and the operating system. They are structured as YAML files, usually simple key–value pairs, where the key is the option name. See the documentation for the complete list of options.
For our network automation needs there are not any particular options that must be configured, but file_roots and pillar_roots are very important to understand.
Salt runs a lightweight file server that uses the existing, encrypted transport to deliver files to minions. Under the file_roots option one can structure the environment beautifully, with a hierarchy that is also easy to understand. In addition, it allows running different environments on the same server without any overlap between them (see Example 2-5).
    file_roots:
      base:
        - /etc/salt/
        - /etc/salt/states
      sandbox:
        - /home/mircea/
        - /home/mircea/states
In Example 2-5, we have two environments, base and sandbox, which are custom environment names. When executing various commands, the user can specify the targeted environment (defaulting to base when not explicitly specified).
The pillar_roots option sets the environments and the directories used to hold the pillar SLS data. Its structure is very similar to file_roots, as you can see in Example 2-6.
    pillar_roots:
      base:
        - /etc/salt/pillar
      sandbox:
        - /home/mircea/pillar
Note that under each environment we are able to specify multiple directories, so we can define the pillar SLS files under more than one directory.
As mentioned in “Pillar”, the pillar data can also be loaded from external services as described in the documentation. There are plenty of already integrated services that can be used straightaway.
For security reasons, a very common practice is using HashiCorp Vault to store sensitive information. Setup is a breeze: it only requires a couple of lines to be appended to the master configuration (see Example 2-7).
    ext_pillar:
      - vault:
          vault.host: 127.0.0.1
          vault.port: 8200
          # The scheme is optional, and defaults to https
          vault.scheme: http
          # The token is required, unless set in environment
          vault.token: 012356789abcdef
The data retrieved from the Vault can be used inside other pillar SLS files as well, but it requires the ext_pillar_first option to be set to true in the master configuration.
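As a sketch of that pattern, with ext_pillar_first: true a pillar SLS can reference a value already merged in from the Vault external pillar. The napalm_password key below is a hypothetical name, not one defined by Salt:

```yaml
proxy:
  proxytype: napalm
  driver: junos
  fqdn: r1.bbone.as1234.net
  username: napalm
  # hypothetical key exposed by the Vault external pillar
  password: {{ pillar['napalm_password'] }}
```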
As the proxy minion is a subset of the regular minion, it inherits the same configuration options, as discussed in the minion configuration documentation. In addition, there are a few other, more specific, values discussed in the proxy minion documentation.
A notable option required for some NAPALM drivers to work properly is multiprocessing set to false, which prevents Salt from starting a sub-process per command; instead, it starts a new thread in which the command is executed (see Example 2-8). This is necessary for SSH-based proxies, as the initialization happens in a different process; after forking, the child instance actually talks to a duplicated file descriptor pointing to a socket, while the parent process is still alive and might even be doing side-effecting background tasks. If the parent is not suspended, you could end up with two processes reading from and writing to the same socket file descriptors. This is why the socket needs to be handled in the same process, with each task being served in a separate thread. This matters because some network devices are managed through SSH-based channels (e.g., Junos, Cisco IOS, Cisco IOS-XR, etc.). However, multiprocessing can be re-enabled for devices using HTTP-based APIs (e.g., Arista or Cisco Nexus).
    master: localhost
    pki_dir: /etc/salt/pki/proxy
    cachedir: /var/cache/salt/proxy
    multiprocessing: False
Each device has an associated unique identifier, called the minion ID. The top file creates the mapping between the minion ID and the corresponding SLS pillar file(s).
As we can have multiple environments defined under the file_roots master option, the top file is also flexible enough to create different bindings between the minion ID and the SLS file, depending on the environment. The top file is another SLS file, named top.sls, found under one of the paths defined in file_roots.
In this book, we will use a simple top file structure, having a one-to-one mapping between the minion ID and the associated pillar SLS (see Example 2-9).
    base:
      'device1':  # minion ID
        - device1_pillar  # minion ID 'device1' loads 'device1_pillar.sls'
      'device2':  # minion ID
        - device2_pillar  # minion ID 'device2' loads 'device2_pillar.sls'
      'device3':  # minion ID
        - device3_pillar  # minion ID 'device3' loads 'device3_pillar.sls'
In this file, device1_pillar, device2_pillar, and device3_pillar represent the names of the SLS pillar files defined in Examples 2-2, 2-3, and 2-4.
When referencing a SLS file in the top file mapping, do not include the .sls extension.
The mapping can be more complex, and we can select multiple minions that use a certain pillar SLS. Targeting minions can be based on grains, regular expressions matched on the minion ID, a list of IDs, statically defined groups, or even external pillar data. We will expand on this momentarily, but meanwhile, Example 2-10 illustrates how an SLS file called bbone_rtr.sls can be used by a group of minions whose IDs begin with router.
    # Apply SLS files from the directory root
    # for the 'base' environment
    base:
      # All minions with a minion ID that
      # begins with 'router'
      'router*':
        # Apply the SLS file named 'bbone_rtr.sls'
        - bbone_rtr
Remember that the top file leverages the SLS principles; therefore, the mapping from Example 2-9 can also be dynamically created as:
    base:
    {% for index in range(1, 4) -%}
      'device{{ index }}':
        - device{{ index }}_pillar
    {% endfor -%}
The top file can equally be defined using the Python renderer, as complex and dynamic as the business logic requires.
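For instance, a top.sls using Salt's pure Python renderer could produce the same mapping as Example 2-9; a minimal sketch, where Salt calls the run() function and uses the returned dictionary as the rendered data:

```python
#!py
# top.sls for Salt's pure Python renderer: the #!py shebang selects
# the renderer, and run() must return the data structure.
def run():
    return {
        'base': {
            'device{}'.format(i): ['device{}_pillar'.format(i)]
            for i in range(1, 4)
        }
    }
```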
Long-running daemons on both the Salt master and Salt minion or proxy minion will maintain the always-on and high-speed communication channel. There are ways to make use of Salt without running these daemons, but those have specific use cases and specific limitations. This section will focus on the common configuration using daemons.
For the scope of this book, file_roots and pillar_roots are enough to start the master process. The only requirement is that the file is valid YAML.
The master process can be started in daemon mode by running salt-master -d. On a Linux system the process can be controlled using systemd, as the Salt packages also contain the service configuration file: systemctl start salt-master. On BSD systems, there are also startup scripts available.
While the master requires one single service to be started, in order to control network gear we need to start one separate process per device, each consuming approximately 60 MB of RAM.
A very good practice, to start with and check for misconfiguration, is to run the proxy in debug mode: salt-proxy --proxyid <MINION ID> -l debug. When everything looks good, we are able to start the process as a daemon, salt-proxy --proxyid <MINION ID> -d, or use systemd: systemctl start salt-proxy@<MINION ID>.
Using systemd, one is able to start multiple devices at a time using shell globbing (e.g., systemctl start salt-proxy@device*). Additionally, it presents the advantage that systemd manages the process startup and restart.
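The salt-proxy@<MINION ID> pattern works because the service is a systemd template unit. A simplified sketch of such a unit follows; the exact file shipped by the Salt packages may differ:

```ini
# /etc/systemd/system/salt-proxy@.service (simplified sketch)
[Unit]
Description=Salt proxy minion for %i
After=network.target

[Service]
Type=simple
# %i expands to the instance name, i.e., the minion ID
ExecStart=/usr/bin/salt-proxy --proxyid=%i
Restart=on-failure

[Install]
WantedBy=multi-user.target
```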
In case we want to avoid using systemd, Salt facilitates the management of the proxy processes using a beacon; we have illustrated this particular use case in Chapter 7.
For example, with the environment setup exemplified in the previous paragraphs, we can start the processes as salt-proxy --proxyid device1 -d and salt-proxy --proxyid device2 -d, or systemctl start salt-proxy@device1 and systemctl start salt-proxy@device2.
Each proxy process manages one individual network device, so for large-scale networks this leads to managing thousands of proxy processes. This is easier to manage when controlling the server that is running the proxy processes using the regular Salt minion—the orchestration of the proxies is effectively reduced to just managing a few configuration files!
When a new minion is started, it generates a private and public key pair. For security reasons, the minion public key is not accepted automatically by the master. The auto_accept configuration option on the master can be turned on, but this is highly discouraged in production environments. The CLI command salt-key -L will display the list of accepted and unaccepted minions.
For this reason, we need to accept the minion key (see Example 2-11).
    $ sudo salt-key -a device1
    The following keys are going to be accepted:
    Unaccepted Keys:
    device1
    Proceed? [n/Y] y
    Key for minion device1 accepted.
The salt-key program also accepts shell globbing to accept multiple minions at once, for example:

    salt-key -a 'device*' -y
There are also good ways to automate acceptance of minion keys using the Salt reactor or the REST API.
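A sketch of the reactor approach: the master is configured to react to authentication events, and the reactor SLS accepts pending keys whose minion ID matches a pattern. The file path and the device prefix below are illustrative assumptions:

```yaml
# /etc/salt/master (excerpt): run a reactor SLS on authentication events
reactor:
  - 'salt/auth':
    - /srv/reactor/auth-pending.sls

# /srv/reactor/auth-pending.sls: accept pending keys for minion IDs
# beginning with 'device' (the prefix is an illustrative choice)
{% if data['act'] == 'pend' and data['id'].startswith('device') %}
minion_add:
  wheel.key.accept:
    - match: {{ data['id'] }}
{% endif %}
```

Be careful with automated acceptance: the match condition is the only gate, so it should be as restrictive as your environment allows.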
In this chapter we have presented several of the most important Salt-specific keywords; they are all very important and constitute the foundation for what follows. We have also briefly covered starting the processes. Now that you have the environment set up, go ahead and start automating with Salt.