The Ryu controller with Python

Ryu is a component-based SDN controller written entirely in Python. It is a project backed by Nippon Telegraph and Telephone (NTT) Labs. The project has Japanese roots; Ryu means "flow" in Japanese and is pronounced "ree-yooh" in English, which fits well with the OpenFlow objective of programming flows in network devices. Being component based, the Ryu framework breaks its software into individual portions that can be used in part or as a whole. For example, on your virtual machine, under /home/ubuntu/ryu/ryu, you can see several folders, including app, base, and ofproto:

ubuntu@sdnhubvm:~/ryu/ryu[00:04] (master)$ pwd
/home/ubuntu/ryu/ryu
ubuntu@sdnhubvm:~/ryu/ryu[00:05] (master)$ ls
app/ cmd/ exception.pyc __init__.py log.pyc topology/
base/ contrib/ flags.py __init__.pyc ofproto/ utils.py
cfg.py controller/ flags.pyc lib/ services/ utils.pyc
cfg.pyc exception.py hooks.py log.py tests/

Each of these folders contains different software components. The base folder contains the app_manager.py file, which loads Ryu applications, provides contexts, and routes messages among them. The app folder contains various applications that you can load with app_manager, such as layer 2 switches, routers, and firewalls. The ofproto folder contains the encoders and decoders for the different OpenFlow versions (1.0, 1.3, and so on). Because the framework is component based, it is easy to pick only the portions of the package that you need. Let's look at an example. First, let's fire up our Mininet network:

$ sudo mn --topo single,3 --mac --controller remote --switch ovsk
...
*** Starting CLI:
mininet>

Then, in a separate terminal window, we can run the simple_switch_13.py application by placing it as the first argument after ryu-manager. Ryu applications are simply Python scripts:

$ cd ryu/
$ ./bin/ryu-manager ryu/app/simple_switch_13.py
loading app ryu/app/simple_switch_13.py
loading app ryu.controller.ofp_handler
instantiating app ryu/app/simple_switch_13.py of SimpleSwitch13
instantiating app ryu.controller.ofp_handler of OFPHandler
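
To see what such a script looks like, the following is a minimal, illustrative skeleton (the file name minimal_app.py and the class name MinimalApp are made up for this example; the imports and the decorator are the standard Ryu ones). It imports only the components it needs (app_manager from the base folder and the OpenFlow 1.3 definitions from ofproto) and logs the datapath ID when a switch connects:

# minimal_app.py: an illustrative skeleton, not part of the Ryu sources
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class MinimalApp(app_manager.RyuApp):
    # declare which OpenFlow version(s) this application speaks
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        # fired once for every switch that connects to the controller
        self.logger.info('switch connected, datapath id: %s',
                         ev.msg.datapath.id)

You would run it the same way as the bundled applications, for example ./bin/ryu-manager minimal_app.py from the ryu directory.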

You can run a quick check between two of the Mininet hosts, h1 and h2, to verify reachability:

mininet> h1 ping -c 2 h2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=8.64 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.218 ms

--- 10.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1003ms
rtt min/avg/max/mdev = 0.218/4.430/8.643/4.213 ms
mininet>

You can also see a few events appearing in the terminal window where you launched the switch application:

packet in 1 00:00:00:00:00:01 ff:ff:ff:ff:ff:ff 1
packet in 1 00:00:00:00:00:02 00:00:00:00:00:01 2
packet in 1 00:00:00:00:00:01 00:00:00:00:00:02 1

As you would expect, host 1 broadcasts the ARP request, to which host 2 answers. Host 1 then sends the ICMP echo request to host 2. Notice that although we successfully sent two ICMP packets from host 1 to host 2, the controller only saw one of them. Also notice that the first ping took about 8.64 ms, while the second took about 0.218 ms, almost a 40x improvement! Why is that? What caused the speed improvement?

You may have already guessed it if you have read the OpenFlow specification or have programmed a switch before. If you put yourself in the switch's position--be the switch, if you will--when you receive that first broadcast packet, you don't know what to do with it. This is considered a table miss, because there is no matching flow in any of the tables. If you recall, we have three options for table misses; the two obvious ones are either to drop the packet or to send it to the controller and wait for further instructions. As it turns out, simple_switch_13.py installs a table-miss flow entry whose action is to send the packet to the controller, so the controller sees every table miss as a PacketIn event:

# install the table-miss entry: match everything, send to the controller
match = parser.OFPMatch()
actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                  ofproto.OFPCML_NO_BUFFER)]
self.add_flow(datapath, 0, match, actions)
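
The add_flow() method used here is a small helper defined in the same application. The following is a simplified sketch of what it does (the version bundled with Ryu also handles an optional buffer_id, omitted here): it wraps the match and actions in an OFPFlowMod message and sends it to the switch:

def add_flow(self, datapath, priority, match, actions):
    # simplified sketch of the helper in the sample application
    ofproto = datapath.ofproto
    parser = datapath.ofproto_parser
    inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                         actions)]
    mod = parser.OFPFlowMod(datapath=datapath, priority=priority,
                            match=match, instructions=inst)
    datapath.send_msg(mod)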

That explains the first three packets: they were all unknown to the switch. The second question was why the controller did not see the second ICMP packet from h1 to h2. As you may have guessed, by the time the switch saw the reply from h2 to h1, it had learned both MAC addresses and had enough information to install the flow entries. Therefore, when the second ICMP packet from h1 to h2 arrived at the switch, it matched an existing flow and no longer needed to be sent to the controller. This is also why the second ping was so much faster: most of the delay in the first packet came from the controller processing and the controller-to-switch communication.
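
To make the learning step concrete, here is a condensed sketch of the kind of packet-in handler simple_switch_13.py uses. Treat it as an outline rather than a line-for-line copy: self.mac_to_port is assumed to be an empty dictionary created in __init__, and the handler sits in the same application class as add_flow() above.

# assumes these imports at the top of the module:
#   from ryu.controller.handler import MAIN_DISPATCHER
#   from ryu.lib.packet import ethernet, packet
@set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
def _packet_in_handler(self, ev):
    msg = ev.msg
    datapath = msg.datapath
    ofproto = datapath.ofproto
    parser = datapath.ofproto_parser
    in_port = msg.match['in_port']

    eth = packet.Packet(msg.data).get_protocols(ethernet.ethernet)[0]
    dpid = datapath.id
    self.mac_to_port.setdefault(dpid, {})

    # learn the source MAC so we do not have to flood for it next time
    self.mac_to_port[dpid][eth.src] = in_port

    # if the destination MAC is already known, use its port; otherwise flood
    if eth.dst in self.mac_to_port[dpid]:
        out_port = self.mac_to_port[dpid][eth.dst]
    else:
        out_port = ofproto.OFPP_FLOOD
    actions = [parser.OFPActionOutput(out_port)]

    # once the destination is known, install a flow so later packets are
    # switched by the switch itself without another trip to the controller
    if out_port != ofproto.OFPP_FLOOD:
        match = parser.OFPMatch(in_port=in_port, eth_dst=eth.dst)
        self.add_flow(datapath, 1, match, actions)

    # forward the current packet with a PacketOut message
    data = msg.data if msg.buffer_id == ofproto.OFP_NO_BUFFER else None
    out = parser.OFPPacketOut(datapath=datapath, buffer_id=msg.buffer_id,
                              in_port=in_port, actions=actions, data=data)
    datapath.send_msg(out)

Installing the learned flows at priority 1 means they take precedence over the priority-0 table-miss entry, which is why the echo reply and the entire second ping round trip never reach the controller.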

This brings us to a good point to discuss some of the Open vSwitch commands that we can use to interact with the switch directly.
