Using OpenDaylight’s REST API from Go

It's been a while since I've had a chance to play with OpenDaylight. Since I've been on a Go kick lately, I decided to try interacting with ODL from Go. Using Go's built-in JSON decoder, it's pretty easy to grab information from OpenDaylight. Here's a quick example of accessing the topology API using Go.

First I define the main package, since I'll just be running this as a single script, and import the necessary libraries in lines 1-9.

Next I define a bunch of structs that match the JSON API in lines 11-67.

Now I can put it to work; let's step through the main function.

I start with building the URL to grab topology information using a base URL string, then joining it with the directory. I print out the URL just as a check.

func main() {
  baseurl := ""
  // The URL to get the topology of the default slice
  url := strings.Join([]string{baseurl, "topology/default"}, "/")
  fmt.Println(url)

Next I do an HTTP GET. The return value is the response and an error variable, which I check before moving on.

resp, err := http.Get(url)
if err != nil {
  fmt.Println(err)
  return
}

Now I take the response, read in the body, and store it in contents.

contents, err := ioutil.ReadAll(resp.Body)

And finally I create an EdgeProperties variable based on the structs above to store the JSON data into, and unmarshal the contents into that variable. Once that is done, I am able to access the various fields of the struct and print them out.

var e EdgeProperties
err = json.Unmarshal(contents, &e)

Now you can run it using go run odl.go.
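For comparison, the same GET-and-decode flow can be sketched in Python. The base URL and the edgeProperties field layout are assumptions here (matching the topology API used above), and json.loads hands back plain dicts, so no struct definitions are needed:

```python
import json
from urllib.request import urlopen

def get_topology(base_url):
    # hypothetical controller URL; same GET-and-decode flow as the Go version
    # (add HTTP basic auth if your controller requires it)
    with urlopen(base_url + "/topology/default") as resp:
        return json.loads(resp.read())

def edge_node_ids(topology):
    # pull the head/tail node IDs out of each edgeProperties entry
    return [(ep['edge']['headNodeConnector']['node']['@id'],
             ep['edge']['tailNodeConnector']['node']['@id'])
            for ep in topology.get('edgeProperties', [])]
```

The trade-off is clear: Go's explicit structs buy you compile-time checking that the Python version gives up.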

Go is a fun language to play with. Even though it seems like Python is the preferred network scripting language, it's still interesting to see how things work in something like Go. I'd like to keep expanding this into an OpenDaylight API Go library that could make it easier to use Go as an alternative language for interacting with ODL. I have a GitHub project set up here if you're interested:

Vagrant with OpenDaylight

As I so often do, I took some inspiration from one of Brent’s posts on setting up OpenDaylight:

And decided to try it out with Vagrant. For those unfamiliar, Vagrant is a tool to make reproducible VMs. By default it works with VirtualBox, but can be configured to work with other virtualization technologies as well. To get started you download and install Vagrant for your OS (Mac, Win, Linux are all supported):
And the same goes for VirtualBox if you don’t already have it:
Once you’ve got them installed, you can copy someone’s Vagrant file, and run vagrant up in the directory to bring up the VM with the configuration specified in the file. I’ve posted my OpenDaylight Vagrant file on github:
So for this example you can clone it:

git clone

Then cd to the directory and type:

vagrant up

This will create a VM with Ubuntu, install all the packages needed, and then finally start up the OpenDaylight controller. The whole process will take a few minutes, but when it's all done you should be able to browse to the controller by going to http://localhost:8080, since I've forwarded port 8080 from localhost to the VM. I've also forwarded ports 6633 and 8090 for OpenFlow and the OSGi console respectively. To access the OSGi console you can telnet localhost 8090. You can also ssh to the VM by issuing vagrant ssh. Another cool thing is that the directory you started the VM from is shared with the VM, so you can copy files back and forth. Check out the Vagrant docs for more details.
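The port forwarding described above lives in the Vagrantfile. As a rough sketch (the actual file in the repo may pin a different box and includes the provisioning steps, which are omitted here), the relevant part might look like:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"   # assumed box name; check the repo's file
  # forward the web UI, OpenFlow, and OSGi console ports described above
  config.vm.network "forwarded_port", guest: 8080, host: 8080
  config.vm.network "forwarded_port", guest: 6633, host: 6633
  config.vm.network "forwarded_port", guest: 8090, host: 8090
end
```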

If anyone has any suggestions on things to add to the Vagrantfile, please feel free to modify and push to the repo. I left out Mininet, but might add it to a separate version. I hope people find this useful. I think it is a great way to easily distribute and replicate VM configs. Another nice thing with this setup is that it always downloads a fresh copy of ODL, so you're running with the latest code each time. No need to maintain a static VM image.

ODL subnet REST API, Python, and some error handling

It's been a while since I've had a chance to play around with OpenDaylight, so I thought I'd warm up with some Python and API calls. One thing I haven't done much of in my code so far is handling errors, so in this post I'm going to access the Subnet API and try to do some error handling from Python as well.

I recently found a nice HTTP client library called Requests that simplifies making HTTP requests. You can add it using pip:

pip install requests

And then import it into your Python code:

import requests
from requests.auth import HTTPBasicAuth

I've also imported the auth support for logging into the controller. Now making a GET request is simple; here is an example of querying the controller for all the configured subnets:

user = 'admin'
password = 'admin'
serverIP = ''
port = '8080'  # default controller web port
container = 'default'
allSubnets = '/controller/nb/v2/subnet/' + container + '/subnet/all'
url = 'http://' + serverIP + ':' + port + allSubnets
r = requests.get(url, auth=(user, password))
print r.json()

Requests can automatically take the response from the controller and give you back the JSON data. Now let's add some error handling by wrapping everything in try/except/else:

errorcodes = {
    400: 'Invalid data',
    401: 'User not authorized',
    409: 'Name conflict',
    404: 'Container name not found',
    500: 'Internal error',
    503: 'Service unavailable'
}

try:
    r = requests.get(url, auth=(user, password))
    r.raise_for_status()
except requests.exceptions.HTTPError as e:
    print e
    print "Reason : %s" % errorcodes[r.status_code]
else:
    # No errors loading URL
    result = find_subnet(r.json()['subnetConfig'], subnetquery)
    print result

The first thing we do is call r.raise_for_status(), which returns None if all goes well; otherwise it raises an exception. One of the exceptions that can be raised is an HTTPError, in which case we'll print out the error.

If everything is ok, then I pass the resulting JSON to a find_subnet function which just searches the JSON list for a particular subnet:

# given a list of subnets and a subnet to find, will return the subnet if found
# or None if not found
def find_subnet(subnets, subnetName):
    for subnet in subnets:
        if subnet['subnet'] == subnetName:
            return subnet
    return None
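To see it in action without a controller, here's a quick check with made-up data (the function is repeated so the snippet stands alone, and the entries only mimic the subnetConfig shape the API returns):

```python
def find_subnet(subnets, subnetName):
    # same search as above: return the matching subnet dict, or None
    for subnet in subnets:
        if subnet['subnet'] == subnetName:
            return subnet
    return None

# made-up subnetConfig-style entries, purely for illustration
subnets = [{'name': 'blue', 'subnet': '10.0.0.254/8'},
           {'name': 'green', 'subnet': '10.1.0.254/16'}]
print(find_subnet(subnets, '10.1.0.254/16'))
```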

Nothing too crazy here, but it's just another Python example for those who are interested. Also, please note that the APIs have changed here and there, so be sure to check the URLs you are calling to make sure they are correct. I've found that the API docs on the OpenDaylight wiki are not always in sync with the version of the controller I'm using. You can find the full script here:

Handling packets on the OpenDaylight controller

One of the actions that an OpenFlow switch can take is to punt a packet to the controller.  This example will take a look at how we can see those packets, and do something with them.  I hope to follow this up with another post that does something more exciting, but for now I’ll just try to print out what type of packet it is.  This is one of the things that (as far as I know)  you would only be able to do with an OSGi module, and is not available via the REST API.

First we create our Maven pom.xml with the required imports. In this case we’ll need some parts of the SAL and switchmanager:

Now we can create our activator. The key ingredient here is to register for callbacks from the Data Packet Service via OSGi in our public void configureInstance method:

        c.add(createContainerServiceDependency(containerName)
                .setService(IDataPacketService.class)
                .setCallbacks("setDataPacketService", "unsetDataPacketService")
                .setRequired(true));

This ties into methods that we implement in our GetPackets class:

    void setDataPacketService(IDataPacketService s) {
        this.dataPacketService = s;
    }

    void unsetDataPacketService(IDataPacketService s) {
        if (this.dataPacketService == s) {
            this.dataPacketService = null;
        }
    }

We make the class implement the IListenDataPacket interface to get notified of packets received on the controller:

public class GetPackets implements IListenDataPacket

And we override the public PacketResult receiveDataPacket(RawPacket inPkt) method:

    public PacketResult receiveDataPacket(RawPacket inPkt) {
        if (inPkt == null) {
            return PacketResult.IGNORED;
        }
        log.trace("Received a frame of size: {}", inPkt.getPacketData().length);
        Packet formattedPak = this.dataPacketService.decodeDataPacket(inPkt);
        if (formattedPak instanceof Ethernet) {
            Object nextPak = formattedPak.getPayload();
            if (nextPak instanceof IPv4) {
                IPv4 ipPak = (IPv4) nextPak;
                log.trace("Handled IP packet");
                int sipAddr = ipPak.getSourceAddress();
                InetAddress sip = NetUtils.getInetAddress(sipAddr);
                int dipAddr = ipPak.getDestinationAddress();
                InetAddress dip = NetUtils.getInetAddress(dipAddr);
                System.out.println("SRC IP: " + sip.getHostAddress());
                System.out.println("DST IP: " + dip.getHostAddress());

                Object frame = ipPak.getPayload();
                if (frame instanceof ICMP) {
                    System.out.println("ICMP from instance");
                }
                String protocol = IPProtocols.getProtocolName(ipPak.getProtocol());
                if (protocol.equals(IPProtocols.ICMP.toString())) {
                    ICMP icmpPak = (ICMP) ipPak.getPayload();
                    System.out.println("ICMP from checking protocol");
                    handleICMPPacket((Ethernet) formattedPak, icmpPak,
                            inPkt.getIncomingNodeConnector());
                }
            }
        }
        return PacketResult.IGNORED;
    }

You’ll notice that we can keep going into the different payloads of the frame/packet to get to the next network layer. However, using instanceof can be slow, so an alternative is to pull out the protocol field, and do a comparison. In my example I’ve specifically handled ICMP packets, and used both methods for determining if the IP packet is ICMP.
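Outside of OSGi, the same layer-by-layer idea can be sketched in a few lines of Python on a raw frame; this uses only standard Ethernet/IPv4 offsets and IANA protocol numbers, nothing ODL-specific:

```python
import struct

def classify(frame):
    """Coarse classification of a raw Ethernet frame (bytes)."""
    # EtherType sits at bytes 12-13 of the Ethernet header; 0x0800 is IPv4
    ethertype = struct.unpack('!H', frame[12:14])[0]
    if ethertype != 0x0800:
        return 'non-IP'
    # the protocol field lives at offset 9 of the IPv4 header (1=ICMP, 6=TCP, 17=UDP)
    proto = frame[14 + 9]
    return {1: 'ICMP', 6: 'TCP', 17: 'UDP'}.get(proto, 'other-IP')
```

Checking the numeric protocol field this way is the same idea as the IPProtocols comparison above, and avoids a chain of instanceof checks.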

Unit Testing OpenDaylight code with Mininet and Python

I recently got pinged by Dale Carder from the University of Wisconsin regarding a Python API he is developing for ODL. The API is a nice step in relieving some of the tedium when dealing with the ODL REST API.
One of the cool things he's done as part of his project is create some unit tests for his code using the Python API for Mininet, coupled with his own Python API code. Unit tests are a great way to make sure the code you're creating does what it should do, and keeps doing the right things when you make changes. They are key to practices such as Test-Driven Development (TDD). Combining Mininet API calls with ODL API calls could be a powerful tool for creating network applications. Let's take a closer look and see what he's done to leverage that Mininet API:

First he creates a class to define the topology:

class SingleSwitchTopo(Topo):
    "Single switch connected to n hosts."
    def __init__(self, n=2, **opts):
        # Initialize topology and default options
        Topo.__init__(self, **opts)
        # mininet/ovswitch does not want ':'s in the dpid
        switch_id = SWITCH_1.translate(None, ':')
        switch = self.addSwitch('s1', dpid=switch_id)
        for h in range(n):
            host = self.addHost('h%s' % (h + 1))
            self.addLink(host, switch)

This gives us a Mininet instance with a switch with hosts that can be used to test the API calls.
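The dpid cleanup in the constructor is worth a note: str.translate(None, ':') is Python 2 only. A standalone sketch of the same step (SWITCH_1 is an assumed example value here):

```python
SWITCH_1 = '00:00:00:00:00:00:00:01'  # example dpid; the real value comes from the test config

# Python 2: SWITCH_1.translate(None, ':')
# Python 3 equivalent of stripping the colons Open vSwitch rejects:
switch_id = SWITCH_1.replace(':', '')
print(switch_id)
```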

Next he starts up the test network:

def setup_mininet_simpleTest():
    "Create and test a simple network"
    topo = SingleSwitchTopo(n=4)
    #net = Mininet(topo)
    net = Mininet( topo=topo, controller=lambda name: RemoteController( 
                   name, ip=CONTROLLER ) )

Then finally he setups the tests and tests the API calls that he makes. Here is the setup and one of the test cases:

class TestSequenceFunctions(unittest.TestCase):
    """Tests for OpenDaylight

       At this point, tests for OpenDaylightFlow and OpenDaylightNode
       are intermingled.  These could be separated out into separate
       test classes.
    """

    def setUp(self):
        odl = OpenDaylight()
        odl.setup['hostname'] = CONTROLLER
        odl.setup['username'] = USERNAME
        odl.setup['password'] = PASSWORD
        self.flow = OpenDaylightFlow(odl)
        self.node = OpenDaylightNode(odl)

        self.switch_id_1 = SWITCH_1

        self.odl_test_flow_1 = {u'actions': u'DROP',
           u'etherType': u'0x800',
           u'ingressPort': u'1',
           u'installInHw': u'true',
           u'name': u'odl-test-flow1',
           u'node': {u'@id': self.switch_id_1, u'@type': u'OF'},
           u'priority': u'500'}

        self.odl_test_flow_2 = {u'actions': u'DROP',
           u'etherType': u'0x800',
           u'ingressPort': u'2',
           u'installInHw': u'true',
           u'name': u'odl-test-flow2',
           u'node': {u'@id': self.switch_id_1, u'@type': u'OF'},
           u'priority': u'500'}

    def test_01_delete_flows(self):
        """Clean up from any previous test run; just delete these
           flows if they exist.
        """

You can find the beginnings of his API here:
Please keep in mind it's still very early going and a work in progress. Thanks go out to Dale for letting me borrow his code for this post.

Adding flows in OpenDaylight using Python and REST API

Building on my last post I’ve put together a small script that will find the shortest path between two switches, then install some flows to create a path between them. I’m leveraging some tools from the NetworkX library in Python and the Northbound REST API on the OpenDaylight controller.

I’m using Mininet as my test network, and the built-in tree topology that was used previously. When loaded into the controller the topology looks like this:
Tree3 Topo
Mininet creates two hosts off each leaf node. The end goal here is to ping between H1 and H8, which are connected to S3 (port 1) and S7 (port 2). First I want to get the shortest path between S3 and S7. To do that, I load the nodes (switches) and edges (links) into a NetworkX Graph object.

I wrote a helper function to make building the URLs for the REST calls a little easier. The URL path to get the edges is: /controller/nb/v2/topology/default/
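The helper itself isn't shown in the snippets, but a minimal sketch consistent with how it gets called would be something like this (the base URL value is an assumption):

```python
def build_url(base, service, container):
    # hypothetical reconstruction of the URL helper; 'base' is assumed to
    # look like 'http://<controller>:8080/controller/nb/v2'
    return '/'.join([base, service, container])

print(build_url('http://localhost:8080/controller/nb/v2', 'topology', 'default'))
```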

This returns a list of 'edgeProperties'. An edge has a headNodeConnector and a tailNodeConnector, which are the ports on either end of the edge (each edge is unidirectional, so there will be two entries for each link). These NodeConnectors have nodes associated with them, which I use to create a tuple to signify an edge. For example, the edge between S4 and S2 looks like this:

            "edge": {
                "tailNodeConnector": {
                    "@type": "OF",
                    "@id": "2",
                    "node": {
                        "@type": "OF",
                        "@id": "00:00:00:00:00:00:00:02"
                    }
                },
                "headNodeConnector": {
                    "@type": "OF",
                    "@id": "3",
                    "node": {
                        "@type": "OF",
                        "@id": "00:00:00:00:00:00:00:04"
                    }
                }
            }

From this I extract the node IDs (i.e., the OpenFlow IDs) to create a tuple:
(00:00:00:00:00:00:00:04, 00:00:00:00:00:00:00:02)
And then I add that as an edge to my Graph object. You'll also see the ID of the NodeConnector itself as one of the attributes, which we'll use later when defining ingress and output ports for our flows. Here's the code:

# Get all the edges/links
resp, content = h.request(build_url(baseUrl, 'topology', containerName), "GET")
edgeProperties = json.loads(content)
odlEdges = edgeProperties['edgeProperties']

# Put nodes and edges into a graph
graph = nx.Graph()
for edge in odlEdges:
  e = (edge['edge']['headNodeConnector']['node']['@id'], edge['edge']['tailNodeConnector']['node']['@id'])
  graph.add_edge(*e)

I do something similar with the nodes, using the URL path: /controller/nb/v2/switch/default/nodes
That returns a list of all the switches in the network, and I add them as nodes to the graph.

# Get all the nodes/switches
resp, content = h.request(build_url(baseUrl, 'switch', containerName) + '/nodes/', "GET")
nodeProperties = json.loads(content)
odlNodes = nodeProperties['nodeProperties']
for node in odlNodes:
  graph.add_node(node['node']['@id'])
Now I run Dijkstra's algorithm via the shortest_path method, which returns a list of nodes along the shortest path.

shortest_path = nx.shortest_path(graph, "00:00:00:00:00:00:00:03", "00:00:00:00:00:00:00:07")

In this case the list is:
[00:00:00:00:00:00:00:03, 00:00:00:00:00:00:00:02, 00:00:00:00:00:00:00:01, 00:00:00:00:00:00:00:05, 00:00:00:00:00:00:00:07]
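On an unweighted graph like this one, Dijkstra's algorithm reduces to a breadth-first search, so the same computation can be sketched without NetworkX (switch IDs shortened here for readability):

```python
from collections import deque

def bfs_shortest_path(adj, src, dst):
    # BFS over an adjacency dict; equivalent to nx.shortest_path on an
    # unweighted graph
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

# the tree,3 topology from the post: s1 at the root, s3 and s7 as leaves
adj = {'s1': ['s2', 's5'], 's2': ['s1', 's3', 's4'], 's5': ['s1', 's6', 's7'],
       's3': ['s2'], 's4': ['s2'], 's6': ['s5'], 's7': ['s5']}
print(bfs_shortest_path(adj, 's3', 's7'))  # → ['s3', 's2', 's1', 's5', 's7']
```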

Now I can build a series of flow entries for each switch along the path, not counting the ones directly connected to the hosts. To do this I created a couple of functions: find_edge looks for an edge that has a particular head node and tail node, and push_path takes a path and the edges from the API call and pushes the appropriate flows to the switches. To create a flow on a switch I use an HTTP POST and send over a JSON object that describes the flow. An example JSON flow entry object:

{"installInHw":"false","name":"test2","node":{"@id":"00:00:00:00:00:00:00:07","@type":"OF"}, "ingressPort":"1","priority":"500","etherType":"0x800","nwSrc":"","nwDst":"", "actions":"OUTPUT=2"}

And the code:

def find_edge(edges, headNode, tailNode):
  for edge in edges:
    if edge['edge']['headNodeConnector']['node']['@id'] == headNode and edge['edge']['tailNodeConnector']['node']['@id'] == tailNode:
      return edge
  return None

def push_path(path, odlEdges, srcIP, dstIP, baseUrl):
  for i, node in enumerate(path[1:-1]):
    flowName = "fromIP" + srcIP[-1:] + "Po" + str(i)
    ingressEdge = find_edge(odlEdges, path[i], node)
    egressEdge = find_edge(odlEdges, node, path[i+2])
    newFlow = build_flow_entry(flowName, ingressEdge, egressEdge, node, srcIP, dstIP)
    switchType = newFlow['node']['@type']
    postUrl = build_flow_url(baseUrl, 'default', switchType, node, flowName)
    # post the flow to the controller
    resp, content = post_dict(h, postUrl, newFlow)

def build_flow_entry(flowName, ingressEdge, egressEdge, node, srcIP, dstIP):
  # Since I don't specify the EtherType, it looks like the IP field is ignored
  # Alternatively I could add a second flow with 0x806 for ARP then 0x800 for IP
  defaultPriority = "500"
  newFlow = {"installInHw":"false"}
  ingressPort = ingressEdge['edge']['tailNodeConnector']['@id']
  egressPort = egressEdge['edge']['headNodeConnector']['@id']
  # the head node of the egress edge is the switch this flow is installed on
  newFlow.update({"name":flowName, "node":egressEdge['edge']['headNodeConnector']['node']})
  newFlow.update({"ingressPort":ingressPort, "priority":defaultPriority})
  newFlow.update({"nwSrc":srcIP, "nwDst":dstIP})  # This can probably be ignored for this example
  newFlow.update({"actions":"OUTPUT=" + egressPort})
  return newFlow

def post_dict(h, url, d):
  resp, content = h.request(
      uri = url,
      method = 'POST',
      headers={'Content-Type' : 'application/json'},
      body=json.dumps(d),
  )
  return resp, content
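The build_flow_url helper isn't shown above, so here's a hypothetical reconstruction matching how it's called. I'm assuming the static-flow (flowprogrammer) northbound path layout; these paths changed between ODL releases, so verify against your controller's API docs:

```python
def build_flow_url(base, container, switchType, switchId, flowName):
    # assumed layout: .../flowprogrammer/<container>/node/<type>/<id>/staticFlow/<name>
    return '/'.join([base, 'flowprogrammer', container, 'node',
                     switchType, switchId, 'staticFlow', flowName])

print(build_flow_url('http://localhost:8080/controller/nb/v2', 'default',
                     'OF', '00:00:00:00:00:00:00:03', 'node3from'))
```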

A few things to note:

  • I didn’t specify an EtherType so that it will send any packet over the links. I needed to do this so that ARP messages would make it across. But in doing so, the IP addresses are ignored. This works fine for my simple example, but I think I may revise this later to handle both ARP and IP separately so that there are not such broad flow entries installed. Then I could specify IP match criteria.
  • I chose not to install the flows on the switches immediately, so that I can take a look at them before actually putting them to work ("installInHw": "false").
  • I'm not yet error checking the responses, but you should get an HTTP 201 for each successfully created flow entry.

After pushing these flows, I reverse the path, and push the reverse path onto the switches:

push_path(shortest_path, odlEdges, dstIP, srcIP, baseUrl)

Finally I add the entries for the leaf nodes connected to the hosts. There is probably a way to do this with host tracker to make this dynamic, but I just hardcoded it for now:

node3FlowFromHost = {"installInHw":"false","name":"node3from","node":{"@id":"00:00:00:00:00:00:00:03","@type":"OF"},"ingressPort":"1","priority":"500","nwSrc":"","actions":"OUTPUT=3"}
node7FlowFromHost = {"installInHw":"false","name":"node7from","node":{"@id":"00:00:00:00:00:00:00:07","@type":"OF"},"ingressPort":"2","priority":"500","nwSrc":"","actions":"OUTPUT=3"}
node3FlowToHost = {"installInHw":"false","name":"node3to","node":{"@id":"00:00:00:00:00:00:00:03","@type":"OF"},"ingressPort":"3","priority":"500","nwDst":"","actions":"OUTPUT=1"}
node7FlowToHost = {"installInHw":"false","name":"node7to","node":{"@id":"00:00:00:00:00:00:00:07","@type":"OF"},"ingressPort":"3","priority":"500","nwDst":"","actions":"OUTPUT=2"}
postUrl = build_flow_url(baseUrl, 'default', "OF", "00:00:00:00:00:00:00:03", "node3from")
resp, content = post_dict(h, postUrl, node3FlowFromHost)
postUrl = build_flow_url(baseUrl, 'default', "OF", "00:00:00:00:00:00:00:07", "node7from")
resp, content = post_dict(h, postUrl, node7FlowFromHost)
postUrl = build_flow_url(baseUrl, 'default', "OF", "00:00:00:00:00:00:00:03", "node3to")
resp, content = post_dict(h, postUrl, node3FlowToHost)
postUrl = build_flow_url(baseUrl, 'default', "OF", "00:00:00:00:00:00:00:07", "node7to")
resp, content = post_dict(h, postUrl, node7FlowToHost)

OK, after all that, I can run my script and then go to the controller to see all the flows added.
Now I install them into the switches by clicking on a flow and then 'Install Flow'.
After the flows have all been installed, I can try out my pings in Mininet:
Mininet Ping 1
And just for good measure I can go over to the Troubleshooting tab on the controller to see the stats of these flows, and see how packets are matching on my flow entries.
You can find the complete script on Github: