Chapter 18

Programming Non-Cisco Platforms

In Chapter 17, “Programming Cisco Platforms,” you learned about using the programmable interfaces on a variety of different platforms from Cisco, such as routers and switches running Cisco IOS XE, IOS XR, and NX-OS, as well as Cisco DNA Center, Meraki, and Webex Teams. In this chapter, you will extend your knowledge to some non-Cisco network operating systems. This chapter covers some of the popular and emerging network operating systems to give you an overview of how to introduce programmability into network management practices on specific vendor platforms.

General Approaches to Programming Networks

As you have already learned in this book, you can use various interfaces to manage network platforms. Some of them are more popular or ubiquitous than others. The following sections provide an overview of several interfaces used with non-Cisco network operating systems.

The Vendor/API Matrix

When you hear about network programmability, you might be inclined to associate it with specific protocols, such as NETCONF, YANG, or HTTP/REST. However, network programmability is a much broader topic. As mentioned in Chapter 1, “The Network Programmability and Automation Ecosystem,” programmability is the ability to manage platforms via programmable interfaces exposed by those platforms. Model-based programmability extends this paradigm to use data models. Network management involves tools and applications that add orchestration to the mix, and by using programmability, you can automate workflows involving several automated tasks performed in a specific sequence on, possibly, a number of different platforms. It’s crucial that you keep this in mind and not limit yourself by unnecessary boundaries because you will run into programmability in places you never thought existed.

Programmability goes hand in hand with the different ways that network platforms can be managed. Figure 18-1 illustrates a sample API vendor mapping for a number of network operating systems. It includes Nokia SR OS and Juniper Junos OS, which, together with Cisco IOS XR, cover the majority of the service provider market in the United States, and Arista EOS and Cumulus Linux, which are very popular for big data centers and cloud providers.

Figure 18-1 Sample Vendor/API Mapping

Figure 18-1 has two axes:

  • The X-axis lists the types of interfaces, which allow us to interoperate with network elements.

  • The Y-axis lists a subset of the vendors that exist on the market today.

If you operate a network built using a vendor not included in this figure, you should assess it based on the interfaces listed on the X-axis.

Programmability via the CLI

The most basic interface for interacting with network functions is the command-line interface (CLI). We cover the CLI under the umbrella of network programmability and automation because there are millions of legacy devices around the world that don’t support NETCONF, RESTCONF, or any of the other APIs discussed so far in this book. Not being able to programmatically manage a device is certainly not a good enough reason to throw it away if it is performing its primary task (such as routing and/or switching) well; companies do not swap these legacy devices with newer ones just to implement programmability and network automation. Therefore, the first step toward programmability and automation for such companies is to adapt their network management logic to manage legacy devices using the CLI alongside more modern devices using NETCONF, RESTCONF, gNMI, or other APIs.

When the Internet was still in its early days, finding trustworthy information on how to configure network elements was a challenging task. The number of competent network engineers was also considerably lower than it is today. The CLI was humorously referred to as the cash line interface. The jokes were justified, as knowledge of the CLI was a rare commodity. Today the situation is very different from the situation a couple decades ago. There are plenty of excellent resources today, including documentation covering detailed configuration for platforms from any vendor, various video trainings, and independent multivendor blogs. The newer humorous name for the CLI is commodity line interface. Today, the CLI is easy to understand and learn to use, and it still plays a significant role in network management.

From a software-oriented point of view, consider the following capabilities of CLI-based configuration management:

  • The CLI configuration can be split into independent blocks.

  • The blocks can be parametrized in terms of what is a variable and what is a fixed value.

  • This configuration can be implemented in a network management system to convert the internal data modeling into the proper sequence of CLI commands.
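The three capabilities above can be sketched in a few lines of shell: a configuration block becomes fixed text with variables, rendered on demand. The `render_loopback` function and the parameter values below are hypothetical, purely for illustration.

```shell
#!/bin/bash
# A CLI block as a template: fixed text plus two variables (the function
# name and values are hypothetical, for illustration only)
render_loopback() {
  local NAME=$1
  local IP=$2
  printf 'interface %s\n   ip address %s\n!\n' "${NAME}" "${IP}"
}

# The same independent block rendered twice with different parameters
render_loopback Loopback678 192.168.192.168/32
render_loopback Loopback123 172.16.0.0/32
```

A network management system does essentially this at scale: it keeps the fixed text, fills in the variables from its internal data model, and emits the resulting command sequence.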

Ansible provides a good example of CLI-based programmable network management, as you will see in Chapter 19, “Ansible.” In addition, tools such as Cisco’s NX-API CLI still involve using the CLI heavily to programmatically manage devices.

In short, in this era of programmability-based automation, knowledge of and ability to use the CLI are still very important.

Programmability via SNMP

For a long time, SNMP has played a crucial role in the monitoring of networks and IT infrastructure. Despite the introduction of streaming telemetry, SNMP continues to have an important role. For brand-new network equipment, streaming telemetry based on gRPC/gNMI is a much better solution. However, as a relatively new management protocol, gRPC/gNMI operates only in reasonably new software, and many legacy devices still require monitoring and rely heavily on SNMP.

SNMP was created to standardize information structures across different vendors. For example, SNMP has a standard ISO MIB tree, which is the same across all the vendors implementing it. This is quite convenient because it means you can collect the information in the same format (counter names, sizing, and so on) and using the same SNMP OID from your Cisco, Juniper, Nokia, and Cumulus devices. When this approach was created, it was a breakthrough. Even today, in the era of open-source products and collaboration, fully standardized YANG models (for example, IETF, OpenConfig) are not yet implemented across all major vendors.
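The vendor-neutral OID idea can be made concrete with the standard IF-MIB: the `ifInOctets` counter column lives at the same OID on any vendor that implements it, and only the per-interface index changes. The `ifIndex` values below are hypothetical; with the Net-SNMP tools installed, each resulting OID could be polled against a real device.

```shell
#!/bin/bash
# IF-MIB ifInOctets is the same standard OID on Cisco, Juniper, Nokia, and
# Cumulus devices: 1.3.6.1.2.1.2.2.1.10.<ifIndex>
IFINOCTETS="1.3.6.1.2.1.2.2.1.10"

# Build the per-interface OIDs to poll (ifIndex values are hypothetical);
# with Net-SNMP, each could be fetched with, e.g.:
#   snmpget -v2c -c public <device> 1.3.6.1.2.1.2.2.1.10.1
for IFINDEX in 1 2 3; do
  echo "${IFINOCTETS}.${IFINDEX}"
done
```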

Besides offering monitoring capabilities, SNMP also provides an opportunity to manage network elements. In fact, it represents the first attempt to make a network element programmable by simply writing the value of a specific parameter to a particular SNMP OID. If you think about it abstractly (not looking into a particular realization), that is what happens today with NETCONF/RESTCONF, only via different channels. It’s essential to acknowledge, however, that the configuration capabilities of SNMP are minimal compared to those of the CLI: there is no feature parity between what can be configured via SNMP and via the CLI, and the CLI is far more capable.

Programmability via the Linux Shell

Linux is the number-one operating system for building highly available infrastructure for the most demanding and resource-intensive applications around the world. For a long time, it was an operating system for servers; that’s why Linux was not a subject of discussion in the networking arena, although each Linux distribution has a complete network stack, including routing and firewalling. Each of the most popular distributions, including Red Hat Enterprise Linux/CentOS, Ubuntu, and Debian, has a CLI, commonly referred to as the Linux shell. All the distributions are capable of running shell scripts natively, so Linux provides programmability out of the box. When using a shell, you can define variables, use loops and conditions, and perform other activities related to programming. Linux and scripting using Bash are covered extensively in Chapters 2, “Linux Fundamentals,” 3, “Linux Networking and Security,” and 4, “Linux Scripting.” The only point that may be added here is that the Linux shell has proven to be a more programmable interface than the regular router/switch CLI, as discussed in Chapter 4.
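As a minimal reminder of what that native programmability looks like, the sketch below uses a variable, a loop, and a condition together; the `swp*` interface names are hypothetical.

```shell
#!/bin/bash
# A variable, a loop, and a condition -- the native building blocks of
# shell programmability (the swp* interface names are hypothetical)
PREFIX="swp"
for N in 1 2 3 4; do
  if [ $((N % 2)) -eq 0 ]; then
    echo "${PREFIX}${N}: even-numbered port"
  else
    echo "${PREFIX}${N}: odd-numbered port"
  fi
done
```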

Programmability via NETCONF

NETCONF operates on XML-wrapped data sent over SSH. For some time, the XML API was the top choice for communication between the different components of distributed applications.

In general, if you take a broader look at the IT/network ecosystem, a network device can be treated equally to any other part of the infrastructure supporting distributed applications, such as an HTTP server or a database service. Approaches and protocols to manage the network functions could be the same as between application components. This is exactly what is happening: All the protocols that were good for communication between traditional IT applications are coming to the network world. NETCONF (which is XML based) and RESTCONF (which is JSON/XML based) are perfect examples of this.

NETCONF (which is discussed in detail in Chapter 14, “NETCONF and RESTCONF”) is one of the newer protocols in the network programmability arena. NETCONF has overcome the limitations of SNMP in terms of feature parity with the CLI. Depending on the vendor, the feature parity between the CLI and NETCONF could be as high as 100%. A significant factor in this success is YANG, which is covered in Chapter 13, “YANG.” In the NETCONF/YANG framework, a network device is represented as a set of parameters with specific values that can be managed in individual or interdependent mode. NETCONF was the first protocol to use YANG, and so today, programmability in the network world is sometimes equated to NETCONF/YANG.
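To make the XML-over-SSH framing tangible, the following is a generic NETCONF request that reads the running configuration. The element names and the `]]>]]>` end-of-message delimiter follow RFC 6241; the exact capabilities and data returned vary per device.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get-config>
    <source>
      <running/>
    </source>
  </get-config>
</rpc>
]]>]]>
```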

Programmability via RESTCONF and REST APIs

RESTCONF is another major programmability-related protocol. RESTCONF is a RESTful API. Today, REST APIs are a de facto standard for communication between various applications, and almost all modern applications either support them or plan to. For instance, management of Docker, the infrastructure for containers, is realized over REST APIs. As another example, InfluxDB, one of the market-leading time-series databases (a crucial component for streaming telemetry), is also managed using REST APIs; InfluxDB communicates with other components of the telemetry stack, such as a telemetry collector or visualization dashboard over REST APIs. In the network world, it is possible to manage Cisco NSO or DNA Center using REST APIs on the northbound interface.

The popularity of REST APIs in general (and RESTCONF as a specific use case) is due to the fact that they are simple, work fast, and provide CRUD (create, read, update, delete) capabilities for working with data. REST APIs (and RESTCONF) are realized over HTTP transport, typically protected by TLS. Therefore, RESTCONF uses a different application layer protocol (HTTP) than NETCONF (SSH). (RESTCONF is covered in detail in Chapter 14, and HTTP, REST, and TLS are covered in Chapter 7, “HTTP and REST,” and Chapter 8, “Advanced HTTP.”)
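On the wire, a RESTCONF read is just an HTTP request. The sketch below follows the RFC 8040 conventions (the `/restconf/data` resource root and the `application/yang-data+json` media type); the host address is a documentation placeholder, and the path assumes the device implements the standard `ietf-interfaces` YANG module.

```http
GET /restconf/data/ietf-interfaces:interfaces HTTP/1.1
Host: 192.0.2.1
Accept: application/yang-data+json
```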

RESTCONF and NETCONF are similar in their use of YANG data models. In reality, a particular network element has a single YANG data model, which defines the data structure and the interdependencies of the parameters. Given that a network device may support both NETCONF and RESTCONF interfaces, it doesn’t matter how the data is transmitted: Transmission can occur over SSH/XML (NETCONF) or HTTPS/JSON/XML (RESTCONF).

However, to complete the picture, it should be noted that RESTCONF/YANG support is still in an emerging stage as of this writing, and support across non-Cisco platforms varies: Juniper Junos OS and Arista EOS support RESTCONF, for example, while Nokia SR OS does not. Nevertheless, the general trend is toward adopting advanced and useful application concepts, and we will see more RESTCONF implementations in the next few years.

Programmability via gRPC/gNMI

Among all the protocols involved in working with YANG data models, gNMI is the most recently developed one. gNMI stands for gRPC Network Management Interface, and, as the name indicates, gNMI uses gRPC as the protocol for remote procedure calls based on HTTP/2 transport to communicate between the server and the network devices. Like RESTCONF, gNMI also has CRUD capabilities. Despite its novelty, gNMI has an extremely strong following and support in the community, and it is being widely adopted. Google is its main supporter and is also leading the development of both gNMI and OpenConfig YANG data models with two essential requirements in mind: low latency and scalability. As you can imagine, Google applies these two requirements to all its applications.

gNMI has a big advantage over NETCONF and RESTCONF in that its low-latency requirement and the way the data is packed for transmission make it very useful for streaming telemetry, where there is much information to be sent from each network function to the telemetry collector for processing. (See Chapter 15, “gRPC, Protobuf, and gNMI,” for details.) All the major vendors, including Cisco, Juniper, and Nokia, have implemented gNMI (and therefore gRPC) as a primary protocol for streaming telemetry. There are other options available, such as TCP or NETCONF telemetry streaming, but they require far more compute resources on the network devices to generate messages or produce more administrative traffic overhead due to inefficient serialization. Therefore, gNMI is actively used for telemetry. Due to its use for telemetry collection and network management, gNMI is becoming very popular.

gNMI adoption for the management of network devices is expected to continue to increase. Having several management protocols—such as NETCONF, RESTCONF, and gNMI—running all together is not necessarily the best use of a device’s resources. Given that all CRUD functions can be realized using a single protocol, in the future we may see management consolidate on gNMI or RESTCONF.

Implementation Examples

Now that you have read about the available APIs, this section guides you through some real implementations with vendors other than Cisco. Some of these implementations might seem quite familiar to you, and others may not. Nevertheless, the following sections will help you follow the development of programmability in the network world.

Converting the Traditional CLI to a Programmable One

The main workhorse in the process of turning the CLI into a programmable interface is the management host. It can be an SDN controller or any other host running some application that is capable of managing the network function. In Chapter 20, “Looking Ahead,” you will learn how easily you can program virtually everything in your network, even with the CLI. Ansible enables you to implement programmability easily, but even without it, you can reach some level of programmability. Earlier in this book, you learned about Linux scripting and Python programming, and you can use that knowledge to start programming network elements.

Let’s say that you need to create an interface on an Arista switch. Example 18-1 shows the syntax for doing so.

Note

The IP address used in this example is arbitrary, and you can use whatever IP address you need.

Example 18-1 Creating the Loopback Interface in Arista EOS

EOS1# show run int lo678
interface Loopback678
   ip address 192.168.192.168/32
!
end

In this example, it is important to understand the sequence of the commands you enter in order to get the interface up and running as well as the variables and the fixed text. The variables in Example 18-1 are the interface name (Loopback678 in this case) and the IP address (192.168.192.168/32). There may be another variable that you can’t see in Example 18-1: the network device itself, where the interface is supposed to be configured. It can be either the hostname, if the management host knows how to reach it, or the IPv4/IPv6 address.

If you know the sequence of the commands and their syntax, you can create a simple Bash script that can create the interface for you. Example 18-2 shows an example of a possible script.

Example 18-2 Bash Script for Creating the Interface in Arista EOS

$ cat create_interface.sh
#!/bin/bash
INTERFACE_NAME=${1}
INTERFACE_IPV4=${2}
NETWORK_FUNCTION=${3}
echo "Interface with the name \"${INTERFACE_NAME}\" and IP \"${INTERFACE_IPV4}\" is to be created on \"${NETWORK_FUNCTION}\""
ssh aaa@${NETWORK_FUNCTION} << EOF
enable
configure terminal
interface ${INTERFACE_NAME}
ip address ${INTERFACE_IPV4}
end
show ip interface ${INTERFACE_NAME}
EOF

Keep in mind that this script needs to be created on a management host rather than on the network element itself. The management host could be your laptop or any jump server in your network that can reach the target network function over SSH.

This script reads a sequence of three arguments, where the first one (${1}) contains the name of the interface, the second one (${2}) contains the IPv4 address of the interface, and the third one (${3}) contains the hostname or management IP address of the target network device. In this specific case, the position of the arguments is important; however, you could extend the script by adding a specific identifier as part of each argument to tell Bash about the argument type, regardless of its position.
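One hypothetical way to implement such position-independent arguments is to accept key=value pairs and dispatch on the key with a case statement; the key names below are made up for illustration.

```shell
#!/bin/bash
# Hypothetical extension: key=value arguments accepted in any order
parse_args() {
  for ARG in "$@"; do
    case ${ARG} in
      name=*) INTERFACE_NAME=${ARG#name=} ;;
      ip=*)   INTERFACE_IPV4=${ARG#ip=} ;;
      host=*) NETWORK_FUNCTION=${ARG#host=} ;;
    esac
  done
}

# The same values as before, but the order no longer matters
parse_args host=EOS1 name=Loopback123 ip=172.16.0.0/32
echo "name=${INTERFACE_NAME} ip=${INTERFACE_IPV4} host=${NETWORK_FUNCTION}"
```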

When the script receives all the arguments, it prints them to stdout for your information and then connects to the network device from the third argument by using non-interactive SSH mode and passes the configuration lines contained between the tags <<EOF and EOF created using the first two arguments. Example 18-3 demonstrates the execution of the Bash script from Example 18-2.

Example 18-3 Creating an Interface in Arista EOS Using the Bash Script from Example 18-2

$ ./create_interface.sh Loopback123 172.16.0.0/32 EOS1
Interface with the name "Loopback123" and IP "172.16.0.0/32" is to be created on
  "EOS1"
Pseudo-terminal will not be allocated because stdin is not a terminal.
Password:
Loopback123 is up, line protocol is up (connected)
  Internet address is 172.16.0.0/32
  Broadcast address is 255.255.255.255
  IPv6 Interface Forwarding : None
  Proxy-ARP is disabled
  Local Proxy-ARP is disabled
  Gratuitous ARP is ignored
  IP MTU 65535 bytes

In Example 18-3, you can see that the interface is successfully created using the Bash script from Example 18-2. The last command in the script produces the show ip interface output for the newly created interface.

As discussed earlier, you could add many enhancements to this script, such as the one already mentioned for the sequence of the arguments or a more sophisticated configuration using loops and conditions. Two factors are crucial for such automation:

  • You must know how to configure network elements with the CLI.

  • You must know how to create the scripts.

The latter is particularly important because even if a script is working with one of the network vendors, it may not work with another one. One of the reasons is that the shell script expects to get the response in a certain way. For example, it might wait for the Password: pattern in the output before it starts entering the password. In addition, it may expect other matches, such as a > or $, before it starts sending the commands in the script. You would need to create proper updates to your script for each managed vendor.
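One way to keep such per-vendor differences manageable is to centralize them in a small lookup helper. The `prompt_pattern` function and the patterns below are hypothetical and illustrative, not an exhaustive mapping.

```shell
#!/bin/bash
# Hypothetical helper: the prompt pattern an automation script would wait
# for before sending commands (illustrative values, not a complete list)
prompt_pattern() {
  case ${1} in
    arista*)  echo '#'  ;;  # privileged-mode prompt
    cumulus*) echo '\$' ;;  # Linux shell prompt
    *)        echo '>'  ;;  # generic fallback
  esac
}

prompt_pattern arista
prompt_pattern cumulus
```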

Now say that you want to turn the script from Example 18-2 into a multivendor one. If you know the CLI and the programming/scripting, you can easily do that. Example 18-4 shows how to extend the initial Bash script so that it’s capable of configuring both Arista EOS and Cumulus Linux switches.

Example 18-4 Adding Multivendor Capabilities to the Bash Script from Example 18-2

$ cat create_interface.sh
#!/bin/bash
INTERFACE_NAME=${1}
INTERFACE_IPV4=${2}
NETWORK_FUNCTION=${3}
VENDOR=${4}
echo "Interface with the name \"${INTERFACE_NAME}\" and IP \"${INTERFACE_IPV4}\" is to be created on \"${NETWORK_FUNCTION}\""
case ${VENDOR} in
arista*)
ssh aaa@${NETWORK_FUNCTION} << EOF
enable
configure terminal
interface ${INTERFACE_NAME}
ip address ${INTERFACE_IPV4}
end
show ip interface ${INTERFACE_NAME}
EOF
;;
cumulus*)
ssh cumulus@${NETWORK_FUNCTION} << EOF
net add interface ${INTERFACE_NAME} ip address ${INTERFACE_IPV4}
net commit
ip addr show dev ${INTERFACE_NAME}
EOF
;;
esac

You should already know the shell syntax used in this example. A fourth variable (${4}), which contains the vendor name, is used with conditional case syntax that defines the proper CLI commands for each vendor. (See Chapter 4 for more details on case.)

If you execute this script against a switch running Cumulus Linux, you see output similar to the output in Example 18-5.

Example 18-5 Executing the Bash Script with Multivendor Capabilities

$ ./create_interface.sh swp1 169.254.0.0/31 192.168.141.156 cumulus
Interface with the name "swp1" and IP "169.254.0.0/31" is to be created on
  "192.168.141.156"
Pseudo-terminal will not be allocated because stdin is not a terminal.
cumulus@192.168.141.156's password:
Welcome to Cumulus VX (TM)
Cumulus VX (TM) is a community supported virtual appliance designed for experiencing, testing and prototyping Cumulus Networks' latest technology. For any questions or technical support, visit our community site at: http://community.cumulusnetworks.com
The registered trademark Linux (R) is used pursuant to a sublicense from LMI, the exclusive licensee of Linus Torvalds, owner of the mark on a world-wide basis.
--- /etc/network/interfaces     2019-06-10 13:24:38.928437459 +0000
+++ /run/nclu/ifupdown2/interfaces.tmp  2019-06-10 13:33:29.702733758 +0000
@@ -7,16 +7,17 @@
 auto lo
 iface lo inet loopback

 # The primary network interface
 auto eth0
 iface eth0 inet dhcp
     vrf mgmt

 auto swp1
 iface swp1
+    address 169.254.0.0/31

 auto mgmt
 iface mgmt
     address 127.0.0.1/8
     vrf-table auto

net add/del commands since the last "net commit"
================================================

User     Timestamp                   Command
-------  --------------------------  ----------------------------------------------
cumulus  2019-06-10 13:33:29.680811  net add interface swp1 ip address 169.254.0.0/31

3: swp1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
  default qlen 1000
    link/ether 00:50:56:32:44:29 brd ff:ff:ff:ff:ff:ff
    inet 169.254.0.0/31 scope global swp1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe32:4429/64 scope link
       valid_lft forever preferred_lft forever

As these examples illustrate, multivendor programmability is not very difficult. These examples illustrate another essential concept: abstraction. These examples include the same set of parameters for both Arista and Cumulus switches, correctly deployed in each network operating system. In addition, you have seen the so-called adaptor level, which translates abstract parameters into the particular syntax of the target network operating system.
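The adaptor level described above can be sketched as a single function: abstract parameters go in, vendor-specific syntax comes out. The function name is hypothetical; the emitted CLI lines follow the ones used in Examples 18-2 and 18-4.

```shell
#!/bin/bash
# The "adaptor level" as a function: abstract parameters in,
# vendor-specific syntax out (function name is hypothetical)
render_interface() {
  local VENDOR=$1 NAME=$2 IP=$3
  case ${VENDOR} in
    arista*)  printf 'interface %s\nip address %s\n' "${NAME}" "${IP}" ;;
    cumulus*) printf 'net add interface %s ip address %s\n' "${NAME}" "${IP}" ;;
  esac
}

# The same abstract parameters deployed in two network operating systems
render_interface arista Loopback123 172.16.0.0/32
render_interface cumulus swp1 169.254.0.0/31
```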

Implementing similar programmability with Python or Ansible would be much easier because these solutions include relevant libraries to manage connectivity to different vendors. However, this section helps you see that programmability can be accomplished even without those tools.

Classical Linux-Based Programmability

The next step in the programmability of the various network devices is the management of Linux if it’s the core operating system. As mentioned at the beginning of the chapter, as networks move toward white box switches, Linux itself is becoming a network operating system. For example, at a high level of abstraction, Cumulus took the Debian Linux distribution and created a product that includes a unique CLI. In the vast majority of cases, such networks rely on open-source products, such as Debian Linux or the IP routing protocol suite FRR.

A bit earlier in this chapter, you saw how to create an interface in Cumulus Linux by using its CLI (refer to Example 18-5). Cumulus Linux is a Linux distro, so all the concepts and tools explained in Chapters 2 through 4 apply to it. This means that interface configuration using a programmable CLI can also be reproduced using the individual files. The interfaces configuration in Cumulus Linux (and Debian in general) is stored in the file /etc/network/interfaces and looks as shown in Example 18-6.

Example 18-6 File with the Interface Configuration in Linux

cumulus@cumulus:mgmt-vrf:~$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*.intf

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp
    vrf mgmt

auto swp1
iface swp1
    address 169.254.0.0/31

auto mgmt
iface mgmt
    address 127.0.0.1/8
    vrf-table auto

Only interfaces with a non-default configuration are listed in this file; the Cumulus switch used in this example has more interfaces than /etc/network/interfaces shows. You can verify this by using the standard Linux command shown in Example 18-7.

Example 18-7 Output of the Interface Configuration in Linux

cumulus@cumulus:mgmt-vrf:~$ ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
  group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master mgmt
  state UP mode DEFAULT group default qlen 1000
    link/ether 00:50:56:30:a2:99 brd ff:ff:ff:ff:ff:ff
3: swp1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode
  DEFAULT group default qlen 1000
    link/ether 00:50:56:32:44:29 brd ff:ff:ff:ff:ff:ff
4: swp2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group
  default qlen 1000
    link/ether 00:50:56:2b:27:db brd ff:ff:ff:ff:ff:ff
5: swp3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group
  default qlen 1000
    link/ether 00:50:56:3a:14:d8 brd ff:ff:ff:ff:ff:ff
6: swp4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group
  default qlen 1000
    link/ether 00:50:56:29:f8:1b brd ff:ff:ff:ff:ff:ff
7: swp5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group
  default qlen 1000
    link/ether 00:50:56:33:1a:43 brd ff:ff:ff:ff:ff:ff
8: mgmt: <NOARP,MASTER,UP,LOWER_UP> mtu 65536 qdisc pfifo_fast state UP mode DEFAULT
  group default qlen 1000
    link/ether b6:b7:20:53:a6:dd brd ff:ff:ff:ff:ff:ff

In Example 18-7, you can see the names of the ports available. All the ports called swp* are used for data plane forwarding. To add an IP address to any of these ports without using the CLI, you need to add the corresponding lines in the file /etc/network/interfaces and reapply the configuration by using the ifreload -a command. This sequence of actions forms a step-by-step procedure that you could implement directly on a network device. To do this, you would need to connect to it and use a Python or Bash script from your management host. Example 18-8 shows a Bash script that performs the file handling.

Example 18-8 Bash Script for the Interface Configuration in Linux

$ cat manage_interaces.sh
#!/bin/bash
INTERFACE_NAME=${1}
INTERFACE_IPV4=${2}
NETWORK_FUNCTION=${3}
VENDOR=${4}
echo "Interface with the name \"${INTERFACE_NAME}\" and IP \"${INTERFACE_IPV4}\" is to be created on \"${NETWORK_FUNCTION}\""
case ${VENDOR} in
cumulus*)
ssh cumulus@${NETWORK_FUNCTION} "sudo chown cumulus:cumulus /etc/network/interfaces"
scp cumulus@${NETWORK_FUNCTION}:/etc/network/interfaces .
echo "auto ${INTERFACE_NAME}
iface ${INTERFACE_NAME}
    address ${INTERFACE_IPV4}" >> interfaces
scp interfaces cumulus@${NETWORK_FUNCTION}:/etc/network/interfaces
ssh cumulus@${NETWORK_FUNCTION} "sudo chown root:root /etc/network/interfaces; sudo ifreload -a; ip addr show dev ${INTERFACE_NAME}"
;;
esac

The input parameters and general logic of the script in this example are the same as in Example 18-4. The difference between these examples is in the set of actions performed. This is what happens in Example 18-8:

  1. The owner of the configuration file is changed to the username used for SSH access (cumulus in this case).

  2. The configuration file is copied locally to the management host over SCP. This action allows you to get the existing set of interfaces and validate or analyze them if needed.

  3. The configuration of the new interface is appended to this configuration file. This step ensures that the configuration file is accurate and complete.

  4. The updated file is uploaded back to the network element so that the new validated set of interfaces is put on the device.

  5. Once the file is uploaded, its owner is changed back to root, the interfaces are reloaded, and the status of the changed interface is displayed.
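The append step (step 3) can be tried locally without a switch: fetch a copy of the file, add a stanza, and inspect the result. The interface name, address, and the temporary file here are hypothetical stand-ins for the real /etc/network/interfaces handling.

```shell
#!/bin/bash
# Local simulation of step 3: append an interface stanza to a copy of the
# file (names and addresses are hypothetical)
INTERFACE_NAME="swp9"
INTERFACE_IPV4="198.51.100.0/31"
WORKFILE=$(mktemp)

# Stand-in for step 2: pretend this content was copied down with scp
printf 'auto lo\niface lo inet loopback\n' > "${WORKFILE}"

# Step 3: append the new interface stanza
cat >> "${WORKFILE}" << EOF
auto ${INTERFACE_NAME}
iface ${INTERFACE_NAME}
    address ${INTERFACE_IPV4}
EOF

cat "${WORKFILE}"
rm -f "${WORKFILE}"
```

On a real device, the updated file would then be uploaded back (step 4) and activated with ifreload -a (step 5).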

Example 18-9 shows the execution of the Bash script from Example 18-8.

Example 18-9 Programmable Management of the Linux Files

$ ./manage_interaces.sh swp2 169.254.0.2/31 192.168.141.156 cumulus
Interface with the name "swp2" and IP "169.254.0.2/31" is to be created on "192.168.141.156"
cumulus@192.168.141.156's password:
cumulus@192.168.141.156's password:
interfaces                                                      100%  437   602.4KB/s   00:00
cumulus@192.168.141.156's password:
interfaces                                                      100%  486    38.1KB/s   00:00
cumulus@192.168.141.156's password:
4: swp2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
  default qlen 1000
    link/ether 00:50:56:2b:27:db brd ff:ff:ff:ff:ff:ff
    inet 169.254.0.2/31 scope global swp2
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe2b:27db/64 scope link tentative
       valid_lft forever preferred_lft forever

The result shown in Example 18-9 is the desired result, as the interface configured here has the correct IP address. What is not desired—and this is a major drawback of using a shell—is that you need to enter the password each time you interact with the remote host over SSH/SCP. Even with this simple script, you would have to enter the password four times.

The easiest way to avoid entering a password multiple times is to use Python or Ansible, which are able to store the credentials and use them when needed. However, if you still want to use Bash, you can use SSH with RSA keys instead of passwords. To implement it, you need to generate a pair of private/public keys on your management host and send the public key to the target managed host so the management host can establish an SSH tunnel to the managed host without the password. Figure 18-2 illustrates the SSH architecture with RSA keys.

Figure 18-2 SSH with RSA Keys

Example 18-10 shows how you can deploy SSH access with RSA keys instead of a password.

Example 18-10 Deploying SSH Access with RSA Keys Instead of a Password

# Generate the SSH RSA Key Pair on the local management host
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/aaa/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/aaa/.ssh/id_rsa.
Your public key has been saved in /home/aaa/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:dspRSyuwkcqiRBLHgItQaabOHR0c7Wgdx9IhBlKT+HY [email protected]
The key's randomart image is:
+---[RSA 2048]----+
|++oo+==oo..      |
|oo=..+o=.+       |
|+*  o O + o      |
|*  o B E o o     |
|o.o * o S +      |
|.+ o   o =       |
|.       o        |
|                 |
|                 |
+----[SHA256]-----+
# Create the folder for SSH keys on the remote managed host
$ ssh cumulus@192.168.141.156 "mkdir /home/cumulus/.ssh/"

# Copy the public RSA key to the remote managed host
$ scp /home/aaa/.ssh/id_rsa.pub cumulus@192.168.141.156:/home/cumulus/.ssh/authorized_keys
cumulus@192.168.141.156's password:
id_rsa.pub                                                      100%  405   462.7KB/s   00:00

# Test the SSH access using the RSA keys without password to the remote managed host
$ ssh cumulus@192.168.141.156
Welcome to Cumulus VX (TM)
Cumulus VX (TM) is a community supported virtual appliance designed for experiencing, testing and prototyping Cumulus Networks' latest technology. For any questions or technical support, visit our community site at: http://community.cumulusnetworks.com
The registered trademark Linux (R) is used pursuant to a sublicense from LMI, the exclusive licensee of Linus Torvalds, owner of the mark on a world-wide basis. Last login: Mon Jun 10 14:19:27 2019 from 192.168.141.144 cumulus@cumulus:mgmt-vrf:~$

The comments in Example 18-10 help you understand each step. As you can see, you need to provide the password for the SSH command when you create the folder for SSH keys on the remote host, as well as when you copy the key file there over SCP. However, you no longer need it when you establish the SSH connection again at the end of the output.

After applying the keys, if you now redo the execution of the configuration script from Example 18-9, the configuration process is truly automated, as shown in Example 18-11.

Example 18-11 Programmable Management of Linux Files with SSH Using RSA Keys

$ ./manage_interaces.sh swp3 169.254.0.4/31 192.168.141.156 cumulus
Interface with the name "swp3" and IP "169.254.0.4/31" is to be created on "192.168.141.156"
interfaces                                                      100%  486
  476.9KB/s   00:00
interfaces                                                      100%  534
  644.7KB/s   00:00
5: swp3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
  default qlen 1000
    link/ether 00:50:56:3a:14:d8 brd ff:ff:ff:ff:ff:ff
    inet 169.254.0.4/31 scope global swp3
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe3a:14d8/64 scope link tentative
       valid_lft forever preferred_lft forever

You might argue that the examples with the interface configuration are straightforward. That is a fair point. However, the core idea of any programmability is to split the whole configuration process into small, easy steps that can be templated and parametrized and then deployed in a repeatable manner.
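As a minimal illustration of the template-and-parametrize idea, the following sketch renders an interface stanza in the /etc/network/interfaces format used on Cumulus Linux earlier in this chapter. The function name and template text are illustrative, not taken from the earlier examples.

```python
# A minimal sketch of templating a configuration step, using only the
# Python standard library. The stanza format follows the Debian-style
# /etc/network/interfaces file used on Cumulus Linux.
from string import Template

IFACE_TEMPLATE = Template(
    "auto $name\n"
    "iface $name\n"
    "    address $address\n"
)

def render_interface(name: str, address: str) -> str:
    """Fill the template with per-device parameters."""
    return IFACE_TEMPLATE.substitute(name=name, address=address)

print(render_interface("swp3", "169.254.0.4/31"))
```

The same rendered text could then be copied to any number of hosts, which is exactly what makes the process repeatable.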

There is no right or wrong way to do automation. Different contexts require different solutions. The next section looks at a newer way to manage the network functions that might be best in some situations: by using NETCONF/YANG.

Managing Network Devices with NETCONF/YANG

Together with YANG data models, the NETCONF protocol enables you to manage network devices in a programmable way. The main difference between the NETCONF/YANG approach and the traditional CLI configuration approach is that the data structure is more consistent with NETCONF/YANG. When you use NETCONF/YANG, you define the particular node that contains the data. This operation is precise, so you don’t need to take care of the sequence of the commands, as you would need to do with CLI templates.

In the examples so far in this chapter, you have learned how to manage network elements via the CLI by using shell scripts. These CLI commands are sent using non-interactive SSH sessions. A NETCONF session, on the other hand, requires an extensive message exchange, so shell scripts are impractical for managing network devices via NETCONF.

Let’s continue our example of creating an interface with an associated IP address, but now let’s look at NETCONF/YANG and another vendor. This time, let’s consider the Nokia SR OS running on a Nokia SR 7750 router. Currently, Nokia supports a vendor-proprietary YANG module and a subset of the OpenConfig modules. Nokia officially published the YANG modules in October 2019, long after other vendors published their modules. Without published YANG modules, the life of a developer is difficult. Before modules are published, a developer must perform reverse engineering, which involves configuring something in the network element and then extracting the configuration over NETCONF to get the YANG model for a specific context. Example 18-12 shows an exchange of NETCONF hello messages between the management host and the Nokia router.

Example 18-12 NETCONF Hello Exchange

$ ssh admin@nokia_router -p 830 -s netconf
[email protected]'s password:
# NETCONF HELLO sent by the remote managed host
<?xml version="1.0" encoding="UTF-8"?>
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
 <capabilities>
  <capability>urn:ietf:params:netconf:base:1.0</capability>
  <capability>urn:ietf:params:netconf:base:1.1</capability>
  <capability>urn:ietf:params:netconf:capability:writable-running:1.0</capability>
  <capability>urn:ietf:params:netconf:capability:notification:1.0</capability>
! The output is truncated for brevity
  <capability>urn:alcatel-lucent.com:sros:ns:yang:conf-r13&amp;module=alu-conf-r13&amp;revision=2019-02-13</capability>
 </capabilities>
 <session-id>100</session-id>
</hello>
]]>]]>
# NETCONF HELLO sent by the local management host
<?xml version="1.0" encoding="UTF-8"?>
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
 <capabilities>
  <capability>urn:ietf:params:netconf:base:1.0</capability>
 </capabilities>
</hello>
]]>]]>

As you learned earlier, the character sequence ]]>]]> indicates the end of the NETCONF messages (see RFC 6241). This example involves manual interaction with the remote managed host over NETCONF. Both Python and Ansible have specific modules that handle such interactions, so you only need to focus on the content of your messages. Each message is preceded by a comment beginning with # that explains the intent of the message.
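As a sketch of how a library hides this manual exchange, the following code uses the ncclient module (used again later in this chapter) to open a NETCONF session; the hello messages are exchanged automatically when the connection is established. The hostname, credentials, and helper function names here are placeholders, not part of the earlier examples.

```python
# A sketch of letting a library drive the NETCONF hello exchange instead
# of typing the XML by hand.
def yang_capabilities(capabilities):
    """Keep only the capabilities that advertise YANG modules."""
    return [cap for cap in capabilities if "module=" in cap]

def print_server_capabilities(host, username, password, port=830):
    # Imported inside the function so the helper above works even
    # without ncclient installed (pip install ncclient)
    from ncclient import manager
    with manager.connect(host=host, port=port, username=username,
                         password=password, hostkey_verify=False,
                         device_params={"name": "alu"}) as conn:
        # By this point, ncclient has already exchanged hello messages
        for cap in yang_capabilities(conn.server_capabilities):
            print(cap)
```

Calling `print_server_capabilities("nokia_router", "admin", "admin")` would list the YANG-related capabilities seen in Example 18-12 without any manual XML handling.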

When the exchange of the NETCONF hello messages is finished, you can start applying the configuration, as illustrated in Example 18-13.

Example 18-13 Configuring the Loopback Interface in the Nokia SR OS Using NETCONF

# NETCONF EDIT-CONFIG message with the loopback details
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<edit-config>
<target>
<running/>
</target>
<config>
<configure xmlns="urn:alcatel-lucent.com:sros:ns:yang:conf-r13">
 <router>
  <interface>
   <interface-name>test_loopback</interface-name>
   <address>
    <ip-address-mask>192.168.192.168/32</ip-address-mask>
   </address>
   <loopback>true</loopback>
   <shutdown>false</shutdown>
  </interface>
 </router>
</configure>
</config>
</edit-config>
</rpc>
]]>]]>
# NETCONF OK confirmation from the remote managed host <?xml version="1.0" encoding="UTF-8"?> <rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"> <ok/> </rpc-reply> ]]>]]>

In the case of NETCONF, it’s straightforward to separate the variables from the rest of the XML message. The variables are the values inside the <> framing; these keys are the configuration nodes. The variables in Example 18-13 are the same as the variables in the previous examples: the interface name and the IPv4 addresses. In addition, in Example 18-13 you can see two more values that were not used before:

  • The administrative state of the interface, which defines whether it is up or down. Theoretically, this value could have been set in the earlier examples as well, but it was omitted for brevity. In Nokia SR OS, however, a newly created interface is disabled by default, so you need to bring it up explicitly.

  • A flag identifying whether the interface is a loopback. Nokia SR OS handles interface naming and configuration differently than Cisco: you can create a Layer 3 interface with any name you like and then either configure it as a loopback or map a physical port to it.

To verify that the interface is configured properly, you can check the output of the CLI from the router, as shown in Example 18-14.

Example 18-14 Verifying the Loopback Interface Configuration on the Nokia Router

*A:nokia_router# show router interface
===============================================================================
Interface Table (Router: Base)
===============================================================================
Interface-Name                   Adm       Opr(v4/v6)  Mode    Port/SapId
   IP-Address                                                  PfxState
-------------------------------------------------------------------------------
! The output is truncated for brevity
test_loopback                    Up        Up/Down     Network loopback
   192.168.192.168/32                                          n/a
! The output is truncated for brevity
-------------------------------------------------------------------------------
Interfaces : 4
===============================================================================

After verifying the configuration, the next step is to create a script, using Ansible or Python, to manage the network functions with the variables. Example 18-15 provides the Python script. (Ansible is covered in Chapter 19.)

Example 18-15 Python Script to Create the Interface in Nokia SR OS

$ cat nokia_netconf.py
import sys, warnings
warnings.simplefilter("ignore", DeprecationWarning)
from ncclient import manager
# Variables
NOKIA_USERNAME = 'admin'
NOKIA_PASSWORD = 'admin'
NOKIA_PORT = 830

# User functions
def npf(host, ip_address, if_name):
    xml_conf = """<config>
      <configure xmlns="urn:alcatel-lucent.com:sros:ns:yang:conf-r13">
       <router>
        <interface>
         <interface-name>%s</interface-name>
         <address>
          <ip-address-mask>%s</ip-address-mask>
         </address>
         <loopback>true</loopback>
         <shutdown>false</shutdown>
        </interface>
       </router>
      </configure>
    </config>""" % (if_name, ip_address)
    conn = manager.connect(host=host, port=NOKIA_PORT,
                           username=NOKIA_USERNAME, password=NOKIA_PASSWORD,
                           timeout=10, device_params={'name': 'alu'},
                           hostkey_verify=False)
    conn.edit_config(target='running', config=xml_conf)
    conn.close_session()

# Main body
if __name__ == '__main__':
    npf(sys.argv[1], sys.argv[2], sys.argv[3])

The syntax of Python is quite different from that of Bash, but the structure of the script is similar. As you learned in Chapter 5, “Python Fundamentals,” at the beginning of the snippet, you define the modules you are going to use, as well as some variables (username, password, and port). Then the function configures Nokia SR OS over NETCONF using the Nokia YANG data model. The template is the same as in Example 18-13, but this time there are two variables, if_name and ip_address, inserted at the positions in the template marked with the %s symbol. To connect with the network function over NETCONF, the specific Python module ncclient (https://pypi.org/project/ncclient/) is used. This module was developed by the community specifically to manage devices over NETCONF, and it includes all the relevant handlers. Its function in this example is to connect to the network element and send the appropriate configuration to a particular datastore. If you compare Examples 18-13 and 18-15, you will see many similarities.

It is worth pointing out that parameters such as the hostname of the network function, the name of the interface, and its IPv4 address are provided as arguments during the launch of the script, as shown in Example 18-16.

Example 18-16 Executing and Checking the Script

$ sudo python nokia_netconf.py secgw-5.viplabs.de 192.168.192.169/32 npf_lo
$
*A:nokia_router# show router interface
===============================================================================
Interface Table (Router: Base)
===============================================================================
Interface-Name                   Adm       Opr(v4/v6)  Mode    Port/SapId
   IP-Address                                                  PfxState
-------------------------------------------------------------------------------
npf_lo                           Up        Up/Down     Network loopback
   192.168.192.169/32                                          n/a
! The output is truncated for brevity
test_loopback                    Up        Up/Down     Network loopback
   192.168.192.168/32                                          n/a
! The output is truncated for brevity
-------------------------------------------------------------------------------
Interfaces : 5
===============================================================================

If the execution of the Python script is successful, you get no notification. However, in the event of failure, as you learned in Chapter 14, you get notifications that help you understand what went wrong.

The script in Example 18-15 is straightforward, but it gives you valuable insight. If your network functions support NETCONF/YANG, you can manage the network as described here by modifying the YANG data structure in the Python script.

Managing Network Devices with RESTCONF/YANG

Recall that RESTCONF uses HTTPS as a transport protocol, with data encoded directly in JSON format; this makes RESTCONF a perfect protocol for all kinds of programmability.

There are several points you need to consider when you plan to use RESTCONF:

  • RESTCONF requires SSL encryption, so you need to create either self-signed or CA-signed SSL certificates on your target network function to be able to connect to it.

  • HTTPS-based applications don’t use credentials to access the network devices directly; instead, they use Base64-encoded username/password pairs. Therefore, you need to transform your credentials into this format.

  • RESTCONF uses HTTP transport, so you need to become comfortable with HTTP request types.

This section guides you through these considerations, using Arista EOS to configure a loopback interface.

Example 18-17 shows how to create SSL certificates and enable RESTCONF.

Example 18-17 Preparing for RESTCONF/YANG Usage

# Generate certificate/key pair
EOS1#security pki key generate rsa 4096 RESTCONF_KEY
EOS1#security pki certificate generate self-signed RESTCONF_CERT key RESTCONF_KEY parameters common-name "rest_test.acme.com"
certificate:RESTCONF_CERT generated
# Enable RESTCONF with the created cert/key pair (only relevant output is provided)
EOS1#show run | section management
management security
   ssl profile RESTCONF_SEC_PROFILE
      certificate RESTCONF_CERT key RESTCONF_KEY
!
management api restconf
   transport https default
      ssl profile RESTCONF_SEC_PROFILE
# Verify that RESTCONF works correctly
EOS1#show management api restconf
Enabled:       Yes
Server:        running on port 6020, in default VRF
SSL Profile:   RESTCONF_SEC_PROFILE
QoS DSCP:      none
# Create credentials for remote access
EOS1#show run | grep 'username'
username aaa privilege 15 secret aaa

In the PKI architecture, you always need a certificate/key pair, so in Example 18-17 you create one as the first step. The second step is to associate the generated certificate/key pair with an SSL security profile, which in the third step is associated with the RESTCONF configuration. You can also specify the port that the RESTCONF process listens on. If you don't do this (as shown in Example 18-17), the default value, TCP port 6020, is used. The last thing you need to do is create the credentials that will be used with RESTCONF.

Now that the network devices are prepared for RESTCONF operation, you need to understand how to authenticate your REST requests to a network device. As you learned in Chapter 14, the REST API doesn't use user credentials per se. Instead, it relies on the Authorization header, which is part of each message sent to the network device. Authentication can be achieved in different ways (for example, using API tokens); one of the easiest is the basic authorization type, in which the string username:password is encoded in Base64 format and this value is added to the Authorization header. Figure 18-3 shows a simple way to convert credentials into Base64 format using the free online tool available at www.base64encode.org.

Figure 18-3 Converting a username:password Tuple to Base64 Format

As shown in this figure, you put your string precisely in the username:password form configured on the device earlier (refer to Example 18-17). Then you just click the Encode button, and at the bottom of the screen, you see the encoded value, which you can then use in the Authorization header in a RESTCONF message.
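If you prefer not to rely on an online tool, the same conversion takes one line of Python's standard base64 module. The sketch below uses the aaa:aaa credentials created in Example 18-17; the helper function name is illustrative.

```python
# Build a basic Authorization header value from a username and password,
# the same conversion Figure 18-3 performs with an online tool.
import base64

def basic_auth_header(username: str, password: str) -> str:
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("aaa", "aaa"))  # Basic YWFhOmFhYQ==
```

The resulting value is exactly the string used in the Authorization header of the curl request in Example 18-18.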

The final step before you can get or set the configuration over RESTCONF is to determine the address of the device to be configured (basically, the URI you need to access). As you learned in Chapter 14, according to RFC 8040, the URI of the node is typically an address in the format https://hostname/restconf/data/yang_module:container.

For information about supported YANG modules for a particular vendor, you can typically look at the GitHub pages or release notes for the version of the network operating system that is being used (refer to Chapter 13). The current example uses Arista, and Figure 18-4 shows the GitHub page where you can find the needed information.

Figure 18-4 Arista YANG Modules

As indicated in this figure, Arista supports OpenConfig YANG modules; OpenConfig is the core set of YANG data models that Arista uses in its products. The OpenConfig YANG data models implemented in Arista EOS have various augmentations and deviations, but there are no vendor-proprietary YANG modules.

In Chapter 13, you learned how to find the name of a YANG module and the name of a container. Using that knowledge and what you have learned about RESTCONF authorization in this chapter, you can create a RESTCONF request like the one shown in Example 18-18.

Example 18-18 RESTCONF GET Request Using an Authorization Header

$ curl -X GET https://EOS1:6020/restconf/data/openconfig-interfaces:interfaces
  --header "Authorization: Basic YWFhOmFhYQ==" --insecure | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2321    0  2321    0     0   9070      0 --:--:-- --:--:-- --:--:--  9101
{
    "openconfig-interfaces:interface": [{
! The output is truncated for brevity
        "config": {
            "arista-intf-augments:load-interval": 300,
            "description": "",
            "enabled": true,
            "loopback-mode": true,
            "name": "Loopback0",
            "openconfig-vlan:tpid": "TPID_0X8100",
            "type": "softwareLoopback"
        },
! The output is truncated for brevity
        "name": "Loopback0",
        "state": {
            "enabled": true,
            "loopback-mode": false,
            "openconfig-vlan:tpid": "TPID_0X8100"
        },
        "subinterfaces": {
            "subinterface": [
                {
                    "config": {
                        "enabled": true,
                        "index": 0
                    },
                    "index": 0,
                    "openconfig-if-ip:ipv4": {
                        "addresses": {
                            "address": [
                                {
                                    "config": {
                                        "ip": "10.0.0.22",
                                        "prefix-length": 32
                                    },
                                    "ip": "10.0.0.22",
                                    "state": {
                                        "ip": "10.0.0.22",
                                        "prefix-length": 32
                                    }
                                }
                            ]
                        },
                        "config": {
                            "dhcp-client": false,
                            "enabled": true,
                            "mtu": 1500
                        },
                        "proxy-arp": {
                            "config": {
                                "mode": "DISABLE"
                            },
                            "state": {
                                "mode": "DISABLE"
                            }
                        },
                        "state": {
                            "dhcp-client": false,
                            "enabled": true,
                            "mtu": 1500
                        },
                        "unnumbered": {
                            "config": {
                                "enabled": false
                            },
                            "state": {
                                "enabled": false
                            }
                        }
                    }
! The output is truncated for brevity
                }
            ]
        }
    }]
}

Because RESTCONF works over HTTP, you can use the handy Linux tool cURL, as shown in this example. The -X option sets the HTTP message type to GET, which means you collect the information already configured on the network function. Then, using the --header (or -H) option, you provide the authorization data, including the credentials encoded in Base64 format. You use the --insecure option to skip the certificate check, which you typically want to do with self-signed certificates. You can also see in this example that the output of the curl command is piped into the Python module json.tool, which makes the output easy to read; without it, the JSON output of curl is just a single string. Even though that string has all the correct JSON framing, it is difficult to read.
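For comparison, here is a sketch of the same GET request expressed with the Python requests module instead of cURL. The helper names are illustrative, and the host, port, and credentials are the lab values used in this section, so treat them as assumptions about your setup.

```python
# The same RESTCONF GET as the curl command above, in Python.
import json

import requests
from requests.auth import HTTPBasicAuth

def restconf_url(host, container, port=6020):
    """Build an RFC 8040-style URI for a YANG container."""
    return "https://%s:%s/restconf/data/%s" % (host, port, container)

def get_interfaces(host, username, password):
    resp = requests.get(restconf_url(host, "openconfig-interfaces:interfaces"),
                        auth=HTTPBasicAuth(username, password),
                        verify=False)  # self-signed cert, like --insecure
    # json.dumps(indent=4) plays the same role as piping to json.tool
    return json.dumps(resp.json(), indent=4)
```

A call such as `print(get_interfaces("EOS1", "aaa", "aaa"))` would produce output equivalent to Example 18-18; note that requests builds the Base64 Authorization header for you.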

Instead of using cURL, you can use any other tool that allows you to work with a REST API. One of the most popular tools in this area is Postman, which provides a flexible way to generate the REST API commands and receive responses. (Postman is beyond the scope of this book.)

In addition to collecting data by using RESTCONF, it’s also possible to create a new configuration, as discussed in Chapter 14. Although it’s theoretically possible to create a shell script that uses cURL to provision a network device, this is not necessarily the best way of managing network devices over RESTCONF. Example 18-19 provides some ideas about how to create a Python script to manage the network function with a REST API.

Example 18-19 Python Script to Manage Network Elements over RESTCONF

$ cat restconf_put.py
import sys
import requests
from requests.auth import HTTPBasicAuth
# Variables
NF_USERNAME = 'aaa'
NF_PASSWORD = 'aaa'
NF_RESTPORT = 6020

# User functions
def set_interfaces(NF_HOSTNAME, NEW_INTERFACE_NAME, NEW_INTERFACE_IP):
    """ Setting the interfaces in RESTCONF """
    templated_stuff = """{
        "openconfig-interfaces:name": "%s",
        "openconfig-interfaces:config": {
            "enabled": true,
            "name": "%s",
            "loopback-mode": true,
            "type": "softwareLoopback"
        },
        "openconfig-interfaces:subinterfaces": {
            "subinterface": [
                {
                    "index": 0,
                    "config": {
                        "enabled": true,
                        "index": 0
                    },
                    "openconfig-if-ip:ipv4": {
                        "addresses": {
                            "address": [
                                {
                                    "ip": "%s",
                                    "config": {
                                        "ip": "%s",
                                        "prefix-length": %s
                                    }
                                }
                            ]
                        }
                    }
                }
            ]
        }
    }""" % (NEW_INTERFACE_NAME, NEW_INTERFACE_NAME,
            NEW_INTERFACE_IP.split('/')[0], NEW_INTERFACE_IP.split('/')[0],
            NEW_INTERFACE_IP.split('/')[1])
    url = "https://%s:%s/restconf/data/openconfig-interfaces:interfaces/interface=%s" % (
        NF_HOSTNAME, NF_RESTPORT, NEW_INTERFACE_NAME)
    set_rest = requests.put(url, auth=HTTPBasicAuth(NF_USERNAME, NF_PASSWORD),
                            verify=False, data=templated_stuff)
    print(set_rest)

# Main body
if __name__ == "__main__":
    set_interfaces(sys.argv[1], sys.argv[2], sys.argv[3])

This script is quite similar in structure to Example 18-15. At the beginning of the script, some modules are imported to provide the functionality the script needs. The module sys, which was also used earlier, makes it possible to send variables as arguments in the CLI, and the module requests is in charge of interacting with the HTTP API used in RESTCONF. The requests module has a built-in ability to convert your credentials into Base64 encoding, so you don't need to do this conversion manually.

When the modules are imported, variables such as the credentials and RESTCONF port are provided. You can also pass these variables by using the CLI if you like. If you have the same credentials for all your devices, by storing them in the Python file, you can avoid entering them each time, and you can still provide the hostname of the target managed network device.

After the block with variables, you see the user-defined functions. This is a convenient way to structure the code because the user functions help make the code reusable. Although this structure is not needed in this small script, applying it is a good habit. Within the user-defined function set_interfaces, you can see the template for the interface followed by the variables. The template contains JSON data that is later used as a payload for a PUT message. The tricky part with the payload is that you need to provide the IP address and the subnet mask separately. Earlier in this chapter, you saw the IP address and the subnet mask together in a single line; in the OpenConfig YANG data model for interfaces, they need to appear separately. To achieve that, you use the function split('/'), which divides a single variable into multiple parts, using the separator provided in single quotes. When the split is complete, you can call the parts of the variable by using an index [x] starting from 0 for the leftmost split value.
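The effect of split('/') is easy to check in isolation:

```python
# Splitting an address/prefix string the way Example 18-19 does
prefix = "192.168.192.168/32"
parts = prefix.split('/')
print(parts[0])  # 192.168.192.168
print(parts[1])  # 32
```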

When the template is filled in with variables, the payload for the REST API call is ready, and you can make the call by using the requests module. The type of the request is specified as requests.put, and you can switch PUT to POST, PATCH, or GET, depending on your needs. Within the REST API request, you need to provide the URI (which is also a template filled in with variables), the authorization data, and the payload (in this case, data). Because a self-signed certificate is used, you should disable the certificate check by passing verify=False. The user-defined function ends by printing the output of the execution result.

The main body of the script calls the user-defined function and provides to it the variables that it receives from the CLI. Example 18-20 shows how the Python script is executed.

Example 18-20 Execution of a Python Script to Configure the Network Function over RESTCONF

$ python restconf_put.py EOS1 Loopback456 192.168.192.168/32
/home/aaa/book/non_cisco_prog/python_restconf/venv/lib/python3.7/site-
  packages/urllib3/connectionpool.py:851: InsecureRequestWarning: Unverified
  HTTPS request is being made. Adding certificate verification is strongly advised.
  See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning)
<Response [200]>

This example launches the Python script the same way the NETCONF script is launched (refer to Example 18-16). In the output shown here, you can see a warning that the certificate isn't verified. This is okay because you manually disabled this check in Example 18-19. However, in a production network, you should use PKI and centralized certificates to avoid security issues. The second part of the output shows the HTTP response code, which in this case is 200, meaning OK. Now, by using the REST API GET call provided in Example 18-18, you can verify the status of the newly created interface.

Summary

This chapter provides an overview of the main APIs for configuring network functions with the shell. It also provides examples of using Python to manage non-Cisco operating systems. The following are some of the specifics covered in this chapter:

  • Programmability has become important in the networking industry, and many vendors have become involved with programmability.

  • To begin automating your network, it is important to determine which APIs your vendor supports.

  • The most popular programmable interfaces today are NETCONF, RESTCONF, and gRPC. However, you can use the CLI for programmability by using scripts.

  • To successfully apply automation, you need to identify which values vary across the network devices being managed. Based on that understanding, you can parametrize your API calls.

  • The most popular and influential scripting language is Python. Nevertheless, you can use Bash, Perl, or any other scripting language that suits a particular solution.

  • NETCONF, RESTCONF, and gRPC are built around a YANG data model, and they are basically just different transport and encoding protocols. You should check a vendor’s supported YANG modules before you start developing automation.

  • Sometimes modules aren’t published. In such a case, you need to either ask your vendor for them or do reverse engineering to extract the configuration from the network function over NETCONF/RESTCONF and analyze the resulting data structure.
