
Project Participant: Aravind Krishnamoorthy

Email: aravind.k90@…

Project Objective:

To implement MobilityFirst's storage-aware routing protocol "GSTAR" using OpenFlow.

Motivation:

Traditionally, routing protocols are implemented as distributed algorithms running on several network devices that communicate with each other to keep the routing information converged. This approach has certain drawbacks, such as the time it takes to recompute routes after link failures and the performance limitations of software implementations (such as those using Click). However, if we can program the switching fabric to route packets based on rules that conform to the desired routing protocol, a much higher throughput can be achieved. This is the idea behind OpenFlow. A central controller, which can see a map of the entire network, runs the routing algorithms and installs appropriate flow rules on the switches to make them act like routers. The architecture is thus one where a central, intelligent controller defines how packets are handled by several dumb network elements, instead of the traditional method of using several intelligent network devices working in conjunction with each other. In this way, we can not only implement routing protocols on a switch, but also program the controller so that backup flows are installed immediately if links or devices in the network go down.

Figure: OpenFlow Architecture

Limitations of the OpenFlow v1.0 Standard:

OpenFlow version 1.0, which most hardware switches and controllers currently support, places severe limitations on the number of header fields that the switch can match on each incoming packet. Specifically, the layer 3 fields that can be matched are the source and destination IP addresses, the ToS bits, and the protocol field, amounting to a total of less than 10 bytes. However, the MobilityFirst layer 3 header has fields that add up to a minimum of 48 bytes in the first packet of a chunk and 12 bytes for the subsequent packets of a chunk. Moreover, for the layer 3 fields to be used in a flow rule at the switch, the EtherType of the packet has to match the corresponding upper-layer protocol; for example, for IPv4 fields such as the source and destination network addresses to be used in a flow rule, the EtherType of the packet has to be 0x0800. To overcome these limitations, we set up flows on the switch as described below.
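
To make this constraint concrete, here is a minimal sketch written against the openflowj Java bindings for OpenFlow 1.0 (the same library family as the OFActionOutput action measured later on this page; the exact controller framework used by the project is not stated here, so treat this as an assumption). The IP address is a hypothetical example; the point is which wildcard bits may legally be cleared for each EtherType.

    import org.openflow.protocol.OFMatch;

    public class EtherTypeConstraint {
        public static void main(String[] args) {
            // Legal OF 1.0 match on an IPv4 field: dl_type must be pinned to
            // 0x0800 before the nw_dst field is honored by the switch.
            OFMatch ipv4Match = new OFMatch();
            ipv4Match.setDataLayerType((short) 0x0800);
            ipv4Match.setNetworkDestination(0x0a000001); // 10.0.0.1, hypothetical
            ipv4Match.setWildcards(OFMatch.OFPFW_ALL
                    & ~OFMatch.OFPFW_DL_TYPE
                    & ~OFMatch.OFPFW_NW_DST_MASK);

            // A MobilityFirst packet carries dl_type 0x27C0, so none of the
            // nw_*/tp_* fields apply; only layer 2 fields (in_port, dl_src,
            // dl_dst, dl_vlan, dl_vlan_pcp) remain usable in a flow rule.
            OFMatch mfMatch = new OFMatch();
            mfMatch.setDataLayerType((short) 0x27C0);
            mfMatch.setWildcards(OFMatch.OFPFW_ALL & ~OFMatch.OFPFW_DL_TYPE);
        }
    }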

Flow Setup:

At the controller, there are no limitations on the number of header fields that can be accessed for computing the route. Hence, the trivial case is to send every packet to the controller and let the controller decide how the packet should be forwarded. However, this will not scale well, since the controller might get overwhelmed with traffic as the number of OpenFlow switches under its control increases. Moreover, when a MobilityFirst chunk is transmitted as multiple packets, only the first packet carries the routing header, so there is no incentive to send all packets to the controller. Additionally, all packets corresponding to a chunk have the same Hop ID. Hence, the first packet of each chunk can be sent to the controller, and once the controller computes the outbound port for that packet, a flow rule can be set up on the switch which says,

Hop ID = x and Source MAC Address = y => Outbound Port = z

However, since the switch does not understand what a Hop ID is, we need to set up flows using only layer 2 header fields (the EtherType of MobilityFirst packets being 0x27C0 prevents us from using any of the higher-layer fields at the switch). Hence, we insert the Hop ID of each packet into the VLAN tag field of the layer 2 header. This way, the switch can look at the source MAC address and the VLAN tag and decide which port the packet has to be forwarded through. The rule now looks like,

VLAN ID = x and Source MAC Address = y => Outbound Port = z
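
As an illustration, here is a minimal sketch of such a rule being built with the openflowj OpenFlow 1.0 classes (OFMatch, OFFlowMod, and the OFActionOutput action benchmarked below). The helper name, the idle timeout, and the buffer-id handling are illustrative choices, not details taken from the project's implementation.

    import java.util.Collections;
    import java.util.List;

    import org.openflow.protocol.OFFlowMod;
    import org.openflow.protocol.OFMatch;
    import org.openflow.protocol.action.OFAction;
    import org.openflow.protocol.action.OFActionOutput;

    public class GstarFlowInstaller {

        // Builds "VLAN ID = hopId and Source MAC = srcMac => Outbound Port = outPort".
        public static OFFlowMod buildChunkFlow(short hopId, byte[] srcMac, short outPort) {
            // Match only the two layer 2 fields the switch can see: dl_vlan
            // carries the MobilityFirst Hop ID, dl_src identifies the previous hop.
            OFMatch match = new OFMatch();
            match.setDataLayerVirtualLan(hopId);
            match.setDataLayerSource(srcMac);
            match.setWildcards(OFMatch.OFPFW_ALL
                    & ~OFMatch.OFPFW_DL_VLAN
                    & ~OFMatch.OFPFW_DL_SRC);

            // Single action: send every packet of the chunk out through outPort.
            OFActionOutput output = new OFActionOutput();
            output.setPort(outPort);
            output.setMaxLength((short) 0xffff);
            List<OFAction> actions = Collections.<OFAction>singletonList(output);

            OFFlowMod flowMod = new OFFlowMod();
            flowMod.setCommand(OFFlowMod.OFPFC_ADD);
            flowMod.setMatch(match);
            flowMod.setActions(actions);
            flowMod.setIdleTimeout((short) 10); // age out once the chunk has passed
            flowMod.setBufferId(-1);            // 0xffffffff: no buffered packet
            flowMod.setLengthU(OFFlowMod.MINIMUM_LENGTH + OFActionOutput.MINIMUM_LENGTH);
            return flowMod;
        }
    }

The controller would send the returned flow mod to the switch while handling the packet-in for the first packet of a chunk; the remaining packets of the chunk then match in the switch without ever reaching the controller.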

Figure: MobilityFirst Header

Storage-Aware Routing Implementation:

When chunks do not need to be stored, the OpenFlow switch can simply look at the header fields and forward the packets out through the appropriate ports. However, when chunks have to be stored for a certain amount of time (for example, because the downstream link is temporarily broken), the packets have to be forwarded to a different location because, unlike a router, the switch does not have any storage. There are different ways to implement this functionality, and they are outlined below.

MF Router Hanging off the OF Switch:

Figure: MF Router Hanging off the OF Switch

The OpenFlow switch has a MobilityFirst router connected to it. As long as chunks need not be stored for any purpose, this router is never used and the switch takes care of forwarding packets. However, when a chunk needs to be stored, possibly because the link to the destination goes down, the switch can forward the packets to the router, which has storage functionality built into it. When the link comes up again, the packets can then be transmitted from the router to the destination. Additionally, while implementing this scheme, there is a choice of making the MobilityFirst router transparent or visible to the source and destination. Both of these methods have their own advantages and drawbacks, as described below.

  • MobilityFirst Router Transparent: In this case, the source and destination see each other, and whenever no storage is required, the OpenFlow switch acts as a conventional layer 2 switch. When chunks have to be stored for some reason, the switch forwards the packets to the MobilityFirst router after making the necessary changes to the layer 2 header (specifically, the layer 2 destination) of the packets; a sketch of such a redirect rule follows this list. This implementation is efficient in the sense that hardly any processing needs to be done (other than layer 2 forwarding) when chunks need not be stored. The downside is that, if the link to the destination goes down, the source will stop receiving link probe messages and hence will stop transmitting packets. This issue will have to be worked around for this implementation to work.
  • MobilityFirst Router Visible: In this case, the source and destination see only the MobilityFirst router and not each other. Hence, the sender will always send packets to the router no matter what the state of the link to the destination is. However, for every packet that does not have to be stored, the switch has to rewrite the layer 2 header and forward the packet directly to the destination instead of forwarding it to the router. The storage case becomes trivial, as it is just conventional layer 2 forwarding. The downside of this implementation is the need to rewrite the header fields of every packet that is cut through, which might impact the achievable throughput.
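
As a sketch of the transparent case, the rule below (same openflowj classes as before, with a hypothetical router MAC address and port) rewrites the layer 2 destination and diverts the chunk to the MF router once the controller learns that the downstream link is broken.

    import java.util.Arrays;
    import java.util.List;

    import org.openflow.protocol.OFFlowMod;
    import org.openflow.protocol.OFMatch;
    import org.openflow.protocol.action.OFAction;
    import org.openflow.protocol.action.OFActionDataLayerDestination;
    import org.openflow.protocol.action.OFActionOutput;

    public class StorageRedirect {

        // Diverts a chunk (identified by Hop ID + source MAC) to the MF router.
        public static OFFlowMod buildRedirectFlow(short hopId, byte[] srcMac,
                                                  byte[] routerMac, short routerPort) {
            OFMatch match = new OFMatch();
            match.setDataLayerVirtualLan(hopId);
            match.setDataLayerSource(srcMac);
            match.setWildcards(OFMatch.OFPFW_ALL
                    & ~OFMatch.OFPFW_DL_VLAN
                    & ~OFMatch.OFPFW_DL_SRC);

            // Rewrite dl_dst so the MF router accepts the frames, then output
            // them through the port the router hangs off.
            OFActionDataLayerDestination rewrite = new OFActionDataLayerDestination();
            rewrite.setDataLayerAddress(routerMac);
            OFActionOutput output = new OFActionOutput();
            output.setPort(routerPort);
            output.setMaxLength((short) 0xffff);
            List<OFAction> actions = Arrays.asList((OFAction) rewrite, output);

            OFFlowMod flowMod = new OFFlowMod();
            flowMod.setCommand(OFFlowMod.OFPFC_ADD);
            flowMod.setMatch(match);
            flowMod.setActions(actions);
            flowMod.setBufferId(-1); // 0xffffffff: no buffered packet
            flowMod.setLengthU(OFFlowMod.MINIMUM_LENGTH
                    + OFActionDataLayerDestination.MINIMUM_LENGTH
                    + OFActionOutput.MINIMUM_LENGTH);
            return flowMod;
        }
    }

The visible case is symmetric: the default rule rewrites dl_dst to the destination host's MAC for cut-through traffic, while the storage path is plain layer 2 forwarding toward the router.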

Performance Evaluation of Various OpenFlow Actions:

Given that the difference between the above two methods lies in the number of header fields that have to be rewritten in each packet, experiments were conducted on MININET and ORBIT to evaluate how this affects throughput. The setup used was a single OpenFlow switch with two nodes attached to it. iperf was then used to measure the throughput between these two nodes under different OpenFlow actions.

Results Using MININET:

  • Only OFActionOutput (output port only): 2.85 Gbps
  • Rewriting source and destination MAC addresses of each packet: 2.74 Gbps

Results Using Pronto 3290:

  • Only OFActionOutput (output port only): 943 Mbps
  • Rewriting source and destination MAC addresses of each packet: 943 Mbps

It can be seen that on a hardware switch, where the OpenFlow actions are performed in TCAMs, an increase in the complexity of the actions does not cause any decrease in throughput. Hence, having the MobilityFirst router visible (and having to rewrite the header fields in a larger number of packets) does not have any drawbacks in terms of performance.

