The state subsystem manages a device to keep it in a predetermined state. On the server side it is used to install packages, start or restart services, and configure files or other data entities. The same methodologies can be applied to whitebox devices that allow custom software installation; for traditional network gear, the state system is equally an excellent way to manage the configuration.
We will rely heavily on the advanced templating methodologies covered in Chapter 4, so we can use the pillar, salt, grains, or opts reserved keywords presented earlier to access data from the corresponding entities. In other words, we can access data from databases, Excel files, Git, or REST APIs directly, and the state does not depend on the mechanism used to retrieve the data: Salt provides a clear separation between the automation logic and the data.
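As a brief refresher on those reserved keywords, the following Jinja sketch shows how each one might be used inside a template; the pillar key, grain names, and function call here are illustrative assumptions, not values from this chapter:

```jinja
{# pillar: data assigned to this minion via the pillar top file #}
{%- set servers = pillar.get('ntp.servers', []) %}
{# grains: facts collected from the device, e.g., vendor or OS #}
{%- if grains['os'] | lower == 'junos' %}
{# salt: the complete set of execution functions #}
{%- set arp_table = salt['net.arp']() %}
{%- endif %}
{# opts: the (proxy) minion configuration options #}
{%- set master = opts['master'] %}
```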
The Salt states are SLS files describing how the managed devices should be configured. Their structure is very simple, based on key-value pairs and lists. As discussed in Chapter 1, SLS defaults to YAML plus Jinja, which is very easy and flexible to design, and it preserves the SLS capabilities, so you can switch to a different combination of data representation and template language, or even pure Python, when required. Inside the state SLS we invoke state functions defined in the state modules.
Like any Salt subsystem, the state subsystem has its own top file, which maps groups of minions to the states they can execute (Example 5-1).
base:
  '*':
    - ntp
    - users
    - lldp
  'router* or G@role:router':
    - match: compound
    - bgp
    - mpls
  'sw* or G@role:switch':
    - match: compound
    - stp
In Example 5-1, any minion can execute the ntp.sls, users.sls, and lldp.sls states, while bgp.sls and mpls.sls can only be executed by minions whose ID starts with router or that have the role grain configured as router; stp.sls can only be executed by the switches, identified by their minion ID or their role grain. Note that the role grain is not globally available; it must be defined by the user according to the business requirements.
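To make the top file concrete, the commands below sketch how it is typically exercised from the master; the individual minion ID is hypothetical:

```shell
# Apply every state mapped to each matched minion in the top file
sudo salt '*' state.highstate

# Run a single state on one minion, independently of the top file
sudo salt 'router1' state.sls bgp
```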
NetConfig is the most flexible state module for network automation. It does not have any particular dependency, but it requires the user to write their own templates. The cross-vendor templating methodologies presented in Chapter 4 remain the same, with the advantage that the separation between data and instructions becomes obvious. We’ll now analyze a few simple examples that serve real-world needs.
Let’s automate the NTP configuration of a large-scale network, to ensure that only the servers 172.17.17.1 and 172.17.17.2 are used for synchronization. The network has devices produced by Cisco, Juniper, and Arista.
The first step is placing the list of NTP servers in the pillar. In this example, the pillar_roots option on the master is set to /etc/salt/pillar. The NTP servers are defined as a list in an SLS file called ntp_servers.sls, shown in Example 5-2.
ntp.servers:
  - 172.17.17.1
  - 172.17.17.2
Using the include, exclude, and extend directives, we can include the structure shown in Example 5-2 for each device very granularly, or we can simply include it for all devices using the top file matching strategies, as shown in Example 5-3.
base:
  '*':
    - ntp_servers
  'device1':
    - device1_pillar
  'device2':
    - device2_pillar
  'device3':
    - device3_pillar
device1, device2, and device3, as well as the corresponding pillar SLS files, were defined in “Configuring the NAPALM Pillar”.
The '*' tells Salt to provide the content from ntp_servers.sls to all minions.
The next step is refreshing the pillar data, by executing salt 'device*' saltutil.refresh_pillar.
We can use the pillar.get execution function to check that the data has been loaded correctly (Example 5-4).

$ sudo salt 'device*' pillar.get ntp.servers
device1:
    - 172.17.17.1
    - 172.17.17.2
device2:
    - 172.17.17.1
    - 172.17.17.2
device3:
    - 172.17.17.1
    - 172.17.17.2
Now that the data is available on all devices without much effort, we can define the template (Example 5-5).
{%- if grains.vendor | lower in ['cisco', 'arista'] %}
no ntp
{%- for server in servers %}
ntp server {{ server }}
{%- endfor %}
{%- elif grains.os | lower == 'junos' %}
system {
  replace:
  ntp {
{%- for server in servers %}
    server {{ server }};
{%- endfor %}
  }
}
{%- endif %}
The template checks the vendor and os grains and generates the NTP server configuration according to the platform. Cisco and Arista are grouped together because their NTP configuration syntax is very similar.
The variable servers will be sent from the state SLS, but it could equally be accessed directly as pillar['ntp.servers'] or salt.pillar.get('ntp.servers'). For flexibility reasons, it is preferable to send the data from the state SLS.
The template is defined under /etc/salt/states/ntp/templates/ntp.jinja: /etc/salt/states is the path to the Salt file server for the states, as defined under file_roots; ntp is the name of the state, which can be a hierarchical directory, with the templates defined in a dedicated subdirectory. This is a good practice to remember when defining complex states.
In the netconfig state, configuration enforcement requires the user to explicitly use the configuration-replace capabilities of the network operating system. If the device does not have replace capabilities, the workaround is a supplementary execution function that retrieves the current state of the feature; it can be invoked inside the template using the salt reserved keyword, to determine what needs to be removed and what needs to be added. Although this requires one additional step and a slightly more complex template, it is a unique feature of Salt, permitting configuration management even for network devices with this drawback.
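To illustrate this workaround, here is a hedged Jinja sketch; ntp_servers.get_configured is an invented execution function name, used only to show the pattern of diffing the live state against the desired servers list:

```jinja
{#- Hypothetical helper returning the NTP servers currently configured
    on the device; not a real Salt function. -#}
{%- set configured = salt['ntp_servers.get_configured']() %}
{#- Remove servers that are configured but no longer desired -#}
{%- for server in configured if server not in servers %}
no ntp server {{ server }}
{%- endfor %}
{#- Add servers that are desired but not yet configured -#}
{%- for server in servers if server not in configured %}
ntp server {{ server }}
{%- endfor %}
```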
The last step is defining the SLS file under file_roots; we will use the /etc/salt/states path. A good practice is grouping the state SLS files into directories according to their role (Example 5-6).
ntp_servers_recipe:
  netconfig.managed:
    - template_name: salt://ntp/templates/ntp.jinja
    - servers: {{ salt.pillar.get('ntp.servers') | json }}
ntp_servers_recipe is the name assigned to the state; it tells Salt to execute the managed function from the netconfig state module, using the template ntp.jinja defined on the Salt file server, and passing the variable servers, whose value is taken from the pillar key ntp.servers.
The state SLS is defined as /etc/salt/states/ntp/init.sls: ntp is the name of the state, while init.sls is a reserved name that allows the state to be executed simply by specifying the name of the directory, that is, ntp. If we defined the state SLS under /etc/salt/states/ntp/example.sls instead, the state would be executed as ntp.example.
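To make these naming conventions concrete, one possible layout under /etc/salt/states is sketched below; apart from the ntp files discussed in this chapter, the names are hypothetical:

```text
/etc/salt/states/
├── top.sls                  # applied via state.highstate
└── ntp/
    ├── init.sls             # executed as: state.sls ntp
    ├── example.sls          # executed as: state.sls ntp.example
    └── templates/
        └── ntp.jinja        # referenced as salt://ntp/templates/ntp.jinja
```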
Note the json Jinja filter in Example 5-6. It is not mandatory, but it is almost always good practice when passing objects; otherwise, the rendered values are interpreted by the YAML parser, which has some surprising type-casting behaviors.
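To see why the filter helps, compare how the servers field renders; the second form shows the Python repr that Jinja would emit without the filter (under Python 2 the strings gain a u prefix, which the YAML parser cannot digest):

```yaml
# With "| json": a JSON list, which is also valid YAML
- servers: ["172.17.17.1", "172.17.17.2"]

# Without the filter (Python 2 repr): breaks the YAML parser
- servers: [u'172.17.17.1', u'172.17.17.2']
```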
There are a few handy fields that can be specified in the SLS:

debug
    Also return the result of the template rendering. The state returns the configuration diff, but that is not necessarily equivalent to the changes loaded.

template_engine
    Use a different template engine, when the user prefers one other than Jinja.

replace
    Replace the entire configuration with the generated contents.
The state.sls execution function invokes the ntp state, defined under /etc/salt/states/ntp/init.sls, as shown in Example 5-7.

$ sudo salt 'device2' state.sls ntp
device2:
----------
          ID: ntp_servers_recipe
    Function: netconfig.managed
      Result: True
     Comment: Configuration changed!
     Started: 13:15:07.608236
    Duration: 8954.756 ms
     Changes:
              ----------
              diff:
                  @@ -55,6 +55,7 @@
                   !
                   ntp source Loopback0
                  -ntp server 1.2.3.4
                  -ntp server 5.6.7.8
                  +ntp server 172.17.17.1
                   ntp serve all
                   !

Summary for device2
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:   8.955 s
The code shown in Example 5-8 changes the SLS to use the debug field.
ntp_servers_recipe:
  netconfig.managed:
    - template_name: salt://ntp/templates/ntp.jinja
    - servers: {{ salt.pillar.get('ntp.servers') | json }}
    - debug: true
When we execute (here in dry-run mode, with test=True), the state also returns the configuration rendered and loaded on the device (see Example 5-9).

$ sudo salt 'device1' state.sls ntp test=True
device1:
----------
          ID: ntp_servers_recipe
    Function: netconfig.managed
      Result: None
     Comment: Configuration discarded.

              Configuration diff:

              [edit system ntp]
              -   peer 1.2.3.4;
              [edit system ntp]
              +   server 172.17.17.1;
              +   server 172.17.17.2;

              Loaded config:

              system {
                replace:
                ntp {
                  server 172.17.17.1;
                  server 172.17.17.2;
                }
              }
     Started: 13:07:09.983598
    Duration: 8566.857 ms
     Changes:

Summary for device1
------------
Succeeded: 1 (unchanged=1)
Failed:    0
------------
Total states run:     1
Total run time:   8.567 s
This state is a very good choice for production environments because it is easy to check the correctness of the template and whether the changes are indeed as expected. The data is clearly decoupled, and changes are now applied according to the pillar, whose structure is vendor-agnostic and human-readable. In our recipe, updating the list of NTP servers in a large-scale network becomes as simple as updating the /etc/salt/pillar/ntp_servers.sls file, followed by executing the state.
We don’t have any specific constraints, so we can structure the pillar data as we wish (see Example 5-10).
openconfig-interfaces:
  interfaces:
    interface:
      xe-0/0/0:
        config:
          mtu: 9000
          description: Interface1
        subinterfaces:
          subinterface:
            0:
              config:
                description: Subinterface1
              ipv4:
                addresses:
                  address:
                    1.2.3.4:
                      config:
                        ip: 1.2.3.4
                        prefix_length: 24
Based on the model exemplified earlier, we can start building the skeleton for the interfaces template (Example 5-11).
{%- if grains.os | lower == 'junos' %}
replace:
interfaces {
{%- for if_name, if_details in interfaces.interface.items() %}
  {{ if_name }} {
    mtu {{ if_details.config.mtu }};
    description {{ if_details.config.description }};
  {%- set subif = if_details.subinterfaces.subinterface %}
  {%- for subif_id, subif_details in subif.items() %}
    unit {{ subif_id }} {
      description "{{ subif_details.config.description }}";
    {%- if subif_details.ipv4 %}
      family inet {
      {%- set subif_addrs = subif_details.ipv4.addresses.address %}
      {%- for _, addr in subif_addrs.items() %}
        address {{ addr.config.ip }}/{{ addr.config.prefix_length }};
      {%- endfor %}
      }
    {%- endif %}
    }
  {%- endfor %}
  }
{%- endfor %}
}
{%- endif %}
The state SLS is defined in a similar way (Example 5-12).
interfaces_recipe:
  netconfig.managed:
    - template_name: salt://interfaces/templates/init.jinja
    - {{ salt.pillar.get('openconfig-interfaces') | json }}
And we can simply execute the state (Example 5-13).
$ sudo salt 'device1' state.sls interfaces
device1:
----------
          ID: interfaces_recipe
    Function: netconfig.managed
      Result: True
     Comment: Configuration changed!
     Started: 16:49:45.827128
    Duration: 7973.572 ms
     Changes:
              ----------
              diff:
                  [edit]
                  +  interfaces {
                  +      xe-0/0/0 {
                  +          description Interface1;
                  +          mtu 9000;
                  +          unit 0 {
                  +              description Subinterface1;
                  +              family inet {
                  +                  address 1.2.3.4/24;
                  +              }
                  +          }
                  +      }
                  +  }

Summary for device1
------------
Succeeded: 1
Failed:    0
------------
Total states run:     1
Total run time:   7.974 s
While the NetConfig state is very flexible, it requires you to define an environment-specific template and, implicitly, to decide the structure of the pillar. Also, finding a common representation for the pillar structure that makes sense and covers several platforms sometimes turns out to be difficult. The structure from Example 5-10 may look overly complicated for generating the configuration, but as we’ll soon see, it is a good pattern for covering the complexity of and differences between various vendors. Fortunately, structures such as the one in Example 5-10 have been standardized, being modeled using YANG. YANG (Yet Another Next Generation), introduced in RFC 6020 in October 2010, is a data modeling language. Several organizations, such as the IETF, have concentrated their efforts on standardizing the modeling of structured information for networking applications. Network vendors have also focused on writing YANG models, but unfortunately this creates divergence when working in multivendor environments.
We will not focus on this divergence here, as our main goal is a unified framework. A very important standardization group is OpenConfig, which has already provided a significant number of YANG models. It consists exclusively of network operators, and its goal is to provide a vendor-neutral representation of configuration and operational data based on production requirements.
Do not conflate YANG with a transport protocol or a data representation language. YANG is a modeling language that defines the structure of documents, regardless of their data representation. These documents can be JSON, XML, or YAML, with a hierarchy tree that follows the YANG model.
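For instance, the same (simplified) instance of the openconfig-lldp model can be represented equivalently in XML or JSON; both documents conform to the same YANG-defined hierarchy:

```text
<!-- XML representation -->
<lldp>
  <config>
    <enabled>true</enabled>
    <hello-timer>20</hello-timer>
  </config>
</lldp>

// JSON representation of the same modeled data
{
  "lldp": {
    "config": {
      "enabled": true,
      "hello-timer": 20
    }
  }
}
```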
In Salt we have leveraged the modeling capabilities of YANG in such a way that the user is not required to write templates, but only to structure the pillars following the YANG models.
As the efforts toward programmable infrastructure are still at the very beginning, so too are the tools involved: the only limitation of the NetYANG Salt state is the capabilities of its dependency, napalm-yang. One of the newest libraries of the NAPALM suite, napalm-yang is a community effort to translate vendor-specific data representations into structured documents, per the YANG models. Some models are already well covered, but many others are waiting to be implemented. Without going into further detail, we want to emphasize that, although writing templates in your own environment might be very tempting and feel more straightforward, a public contribution scales much better in the long term and gives a lot of help back to the community.
A good starting point to visualize the hierarchy of the OpenConfig models can be found on the OpenConfig site.
Navigating through the YANG model using the tool referenced above (in particular, openconfig-lldp.html) and looking for the config containers, we can define the YAML structure shown in Example 5-14.
openconfig-lldp:
  lldp:
    config:
      enabled: true
      hello-timer: 20  # seconds
      system-name: r1.bbone
      chassis-id-type: MAC_ADDRESS
    interfaces:
      interface:
        xe-0/0/0:
          config:
            enabled: true
        xe-0/0/1:
          config:
            enabled: true
        xe-0/0/2:
          config:
            enabled: true
The pillar structure from Example 5-10 was intentionally designed to anticipate the OpenConfig models, so we can reuse it here. With that said, we only need to define the SLS and execute it, as shown in Example 5-15.
interfaces_oc_config:
  napalm_yang.configured:
    - data: {{ salt.pillar.get('openconfig-interfaces') | json }}
    - models:
      - models.openconfig_interfaces
Without having to write an environment-specific template, we can execute the state and deploy the changes on the device. This produces exactly the same configuration to be loaded on the device, but with much less effort. See Example 5-16.
$ sudo salt 'device1' state.sls oc_interfaces
device1:
----------
          ID: interfaces_oc_config
    Function: napalm_yang.configured
      Result: True
     Comment: Configuration changed!
     Started: 16:46:59.262230
    Duration: 7612.234 ms
     Changes:
              ----------
              diff:
                  [edit]
                  +  interfaces {
                  +      xe-0/0/0 {
                  +          description Interface1;
                  +          mtu 9000;
                  +          unit 0 {
                  +              description Subinterface1;
                  +              family inet {
                  +                  address 1.2.3.4/24;
                  +              }
                  +          }
                  +      }
                  +  }

Summary for device1
------------
Succeeded: 1
Failed:    0
------------
Total states run:     1
Total run time:   7.612 s
Capirca is a mature open source library for multiplatform ACL generation, developed by Google. It simplifies the generation and maintenance of complex filters for more than ten operating systems, including the most common: Cisco IOS, IOS-XR, NX-OS, Arista, Juniper, Palo Alto, and Brocade.
The library requires a configuration file with a specific format, but in Salt the default interpreter is bypassed, so we are able to define the data in whatever format we prefer in the pillar, or in external pillars backed by an external service.
The NetACL Salt state requires Capirca to be installed. For the firewall configuration of network devices, this is yet another alternative for quick development without much effort, but it requires a careful read of the documentation in order to understand the caveats. Again, if you discover limitations, consider contributing to Capirca as this is the beauty of open source.
Let’s assume we need to automate the configuration of a firewall filter that allows TCP traffic from 1.2.3.4 on port 1717, then counts and finally rejects anything else.
The pillar structure and its fields are defined very naturally (Example 5-17), but there are also platform-specific options, such as counter or policer, and we highly recommend consulting the wiki page.
acl:
  - FILTER-EXAMPLE:
      terms:
        - ALLOW-PORT:
            source_address: 1.2.3.4
            protocol: tcp
            port: 1717
            action: accept
            counter: ACCEPTED-PORT
        - DENY-ALL:
            counter: DENY-ALL
            action: deny
The state SLS file is again very simple (Example 5-18).
state_filter_example:
  netacl.filter:
    - filter_name: FILTER-EXAMPLE
    - pillar_key: acl
netacl.filter is the name of the state function. The netacl state module has three main functions: filter, to manage the configuration of a specific firewall filter; term, for the management of a certain term inside a filter; and managed, for the entire firewall configuration. The filter_name field specifies the name of the filter (FILTER-EXAMPLE, in this case), as configured in the pillar, and the pillar_key field specifies the pillar key where the firewall configuration is defined (acl, in this case).
Example 5-19 shows everything required for executing the state.
$ sudo salt device1 state.sls firewall
device1:
----------
          ID: state_filter_example
    Function: netacl.filter
      Result: True
     Comment: Configuration changed!
     Started: 11:58:44.709472
    Duration: 11879.601 ms
     Changes:
              ----------
              diff:
                  [edit firewall family inet]
                  +    /*
                  +     ** $Id: state_filter_example $
                  +     ** $Date: 2017/06/15 $
                  +     **
                  +     */
                  +    filter FILTER-EXAMPLE {
                  +        interface-specific;
                  +        term ALLOW-PORT {
                  +            from {
                  +                source-address {
                  +                    1.2.3.4/32;
                  +                }
                  +                protocol tcp;
                  +                port 1717;
                  +            }
                  +            then {
                  +                count ACCEPTED-PORT;
                  +                accept;
                  +            }
                  +        }
                  +        term DENY-ALL {
                  +            then {
                  +                count DENY-ALL;
                  +                discard;
                  +            }
                  +        }
                  +    }

Summary for device1
------------
Succeeded: 1
Failed:    0
------------
Total states run:     1
Total run time:  11.880 s
Moving forward, let’s enhance the ALLOW-PORT term to also allow UDP traffic over port 1719.
For flexibility reasons, most of the fields can be either a single value or a list of values. With that said, we only need to transform the port and protocol fields into lists of values (Example 5-20).
acl:
  - FILTER-EXAMPLE:
      terms:
        - ALLOW-PORT:
            source_address: 1.2.3.4
            protocol:
              - tcp
              - udp
            port:
              - 1717
              - 1719
            action: accept
            counter: ACCEPTED-PORT
        - DENY-ALL:
            counter: DENY-ALL
            action: deny
The order of the terms is important! The configuration generated and loaded on the device reflects the order defined in the pillar.
In this chapter we have presented three of the most important Salt states for network automation. We encourage you to consider each of them and decide which one is the most suitable to fulfill the needs of your particular environment.
The Salt community also provides prewritten states under the name of Salt formulas, which can be downloaded and executed; the user only needs to populate the pillar data. Formulas are also a good resource for learning best practices for maintainable states. Examples of such formulas include napalm-interfaces, for interface configuration management on network devices, and napalm-install, which installs NAPALM and the underlying system dependencies.