ODL subnet REST API, Python, and some error handling

It's been a while since I've had a chance to play around with OpenDaylight, so I thought I'd warm up with some Python and API calls. One thing I haven't done much of in my code so far is handling errors, so in this post I'm going to access the Subnet API and try to do some error handling from Python as well.

I recently found a nice HTTP client library called Requests that simplifies making HTTP requests. You can add it using pip:

pip install requests

And then import it into your Python code:

import requests
from requests.auth import HTTPBasicAuth

I’ve also imported the auth helpers for logging into the controller.
Now making a GET request is simple; here is an example of querying the controller for all the configured subnets:

user = 'admin'
password = 'admin'
serverIP = ''
port = '8080'
container = 'default'
allSubnets = '/controller/nb/v2/subnet/' + container + '/subnet/all'
url = 'http://' + serverIP + ':' + port + allSubnets
r = requests.get(url, auth=(user, password))
print r.json()

Requests can automatically take the response from the controller and give you back the JSON data. Now let's add some error handling by wrapping everything in try/except/else:

errorcodes = {
    400: 'Invalid data',
    401: 'User not authorized',
    409: 'Name conflict',
    404: 'Container name not found',
    500: 'Internal error',
    503: 'Service unavailable'
}

try:
    r = requests.get(url, auth=(user, password))
    r.raise_for_status()
except requests.exceptions.HTTPError as e:
    print e
    print "Reason : %s" % errorcodes[r.status_code]
else:
    # No errors loading URL
    result = find_subnet(r.json()['subnetConfig'], subnetquery)
    print result

The first thing we do is call r.raise_for_status(), which returns None if all goes well; otherwise it raises an exception. One of the exceptions that can be raised is an HTTPError, in which case we'll print out the error.
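One thing to watch: errorcodes[r.status_code] will itself raise a KeyError for any status the dict doesn't cover. A slightly safer lookup might look like this (just a sketch; the messages mirror the table above):

```python
errorcodes = {
    400: 'Invalid data',
    401: 'User not authorized',
    409: 'Name conflict',
    404: 'Container name not found',
    500: 'Internal error',
    503: 'Service unavailable',
}

def reason(status_code):
    # Fall back to a generic message for codes we don't have a string for
    return errorcodes.get(status_code, 'HTTP error %d' % status_code)

print(reason(404))  # Container name not found
print(reason(418))  # HTTP error 418
```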

If everything is ok, then I pass the resulting JSON to a find_subnet function which just searches the JSON list for a particular subnet:

# given a list of subnets and a subnet to find, will return the subnet if found
# or None if not found
def find_subnet(subnets, subnetName):
    for subnet in subnets:
        if subnet['subnet'] == subnetName:
            return subnet
    return None
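To try find_subnet outside the controller, here's a standalone check with a hand-made sample list (the entries are made up; real subnetConfig objects carry more fields):

```python
def find_subnet(subnets, subnetName):
    # Same helper as above, repeated so this snippet runs on its own
    for subnet in subnets:
        if subnet['subnet'] == subnetName:
            return subnet
    return None

# Hypothetical sample shaped like the API's subnetConfig list
subnets = [
    {"name": "subnet1", "subnet": "10.0.0.254/8"},
    {"name": "subnet2", "subnet": "192.168.1.254/24"},
]

print(find_subnet(subnets, "10.0.0.254/8")["name"])  # subnet1
print(find_subnet(subnets, "1.2.3.4/32"))            # None
```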

Nothing too crazy here, but just another Python example for those who are interested. Also, please note that the APIs have changed here and there, so be sure to check the URLs you are calling to make sure they are correct. I’ve found that the API docs on the OpenDaylight wiki are not always in sync with the version of the controller I’m using. You can find the full script here:

Handling packets on the OpenDaylight controller

One of the actions that an OpenFlow switch can take is to punt a packet to the controller. This example will take a look at how we can see those packets and do something with them. I hope to follow this up with another post that does something more exciting, but for now I'll just print out what type of packet it is. This is one of the things that (as far as I know) you can only do with an OSGi module; it is not available via the REST API.

First we create our Maven pom.xml with the required imports. In this case we’ll need some parts of the SAL and switchmanager:

Now we can create our activator. The key ingredient here is to register for callbacks from the Data Packet Service via OSGi in our public void configureInstance method:

    c.add(createContainerServiceDependency(containerName)
            .setService(IDataPacketService.class)
            .setCallbacks("setDataPacketService", "unsetDataPacketService")
            .setRequired(true));

This ties into methods that we implement in our GetPackets class:

    void setDataPacketService(IDataPacketService s) {
        this.dataPacketService = s;
    }

    void unsetDataPacketService(IDataPacketService s) {
        if (this.dataPacketService == s) {
            this.dataPacketService = null;
        }
    }
We make the class implement the IListenDataPacket interface to get notified of packets received on the controller:

public class GetPackets implements IListenDataPacket

And we override the public PacketResult receiveDataPacket(RawPacket inPkt) method:

    public PacketResult receiveDataPacket(RawPacket inPkt) {
        if (inPkt == null) {
            return PacketResult.IGNORED;
        }
        log.trace("Received a frame of size: {}", inPkt.getPacketData().length);
        Packet formattedPak = this.dataPacketService.decodeDataPacket(inPkt);
        if (formattedPak instanceof Ethernet) {
            Object nextPak = formattedPak.getPayload();
            if (nextPak instanceof IPv4) {
                IPv4 ipPak = (IPv4) nextPak;
                log.trace("Handled IP packet");
                int sipAddr = ipPak.getSourceAddress();
                InetAddress sip = NetUtils.getInetAddress(sipAddr);
                int dipAddr = ipPak.getDestinationAddress();
                InetAddress dip = NetUtils.getInetAddress(dipAddr);
                System.out.println("SRC IP: " + sip.getHostAddress());
                System.out.println("DST IP: " + dip.getHostAddress());

                Object frame = ipPak.getPayload();
                if (frame instanceof ICMP) {
                    System.out.println("ICMP from instanceof");
                }
                // Note: String comparison needs equals(), not ==
                String protocol = IPProtocols.getProtocolName(ipPak.getProtocol());
                if (protocol.equals(IPProtocols.ICMP.toString())) {
                    ICMP icmpPak = (ICMP) ipPak.getPayload();
                    System.out.println("ICMP from checking protocol");
                    handleICMPPacket((Ethernet) formattedPak, icmpPak,
                            inPkt.getIncomingNodeConnector());
                }
            }
        }
        return PacketResult.IGNORED;
    }

You’ll notice that we can keep going into the different payloads of the frame/packet to get to the next network layer. However, using instanceof can be slow, so an alternative is to pull out the protocol field, and do a comparison. In my example I’ve specifically handled ICMP packets, and used both methods for determining if the IP packet is ICMP.

Unit Testing OpenDaylight code with Mininet and Python

I recently got pinged by Dale Carder from the University of Wisconsin regarding a Python API he is developing for ODL. The API is a nice step toward relieving some of the tedium of dealing with the ODL REST API.
One of the cool things he's done as part of his project is create some unit tests for his code using the Python API for Mininet, coupled with his Python API code. Unit tests are a great way to make sure the code you're creating does what it should do, and keeps doing the right things when you make changes. They are key to practices such as Test-Driven Development (TDD). Combining Mininet API calls with ODL API calls could be a powerful tool for creating network applications. Let's take a closer look and see what he's done to leverage that Mininet API:

First he creates a class to define the topology:

class SingleSwitchTopo(Topo):
    "Single switch connected to n hosts."
    def __init__(self, n=2, **opts):
        # Initialize topology and default options
        Topo.__init__(self, **opts)
        # mininet/ovswitch does not want ':'s in the dpid
        switch_id = SWITCH_1.translate(None, ':')
        switch = self.addSwitch('s1', dpid=switch_id)
        for h in range(n):
            host = self.addHost('h%s' % (h + 1))
            self.addLink(host, switch)

This gives us a Mininet instance with a switch and hosts that can be used to test the API calls.

Next he starts up the test network:

def setup_mininet_simpleTest():
    "Create and test a simple network"
    topo = SingleSwitchTopo(n=4)
    #net = Mininet(topo)
    net = Mininet( topo=topo, controller=lambda name: RemoteController( 
                   name, ip=CONTROLLER ) )

Then finally he sets up the tests and exercises the API calls that he makes. Here is the setup and one of the test cases:

class TestSequenceFunctions(unittest.TestCase):
    """Tests for OpenDaylight

       At this point, tests for OpenDaylightFlow and OpenDaylightNode
       are intermingled.  These could be separated out into separate
       classes."""

    def setUp(self):
        odl = OpenDaylight()
        odl.setup['hostname'] = CONTROLLER
        odl.setup['username'] = USERNAME
        odl.setup['password'] = PASSWORD
        self.flow = OpenDaylightFlow(odl)
        self.node = OpenDaylightNode(odl)

        self.switch_id_1 = SWITCH_1

        self.odl_test_flow_1 = {u'actions': u'DROP',
           u'etherType': u'0x800',
           u'ingressPort': u'1',
           u'installInHw': u'true',
           u'name': u'odl-test-flow1',
           u'node': {u'@id': self.switch_id_1, u'@type': u'OF'},
           u'priority': u'500'}

        self.odl_test_flow_2 = {u'actions': u'DROP',
           u'etherType': u'0x800',
           u'ingressPort': u'2',
           u'installInHw': u'true',
           u'name': u'odl-test-flow2',
           u'node': {u'@id': self.switch_id_1, u'@type': u'OF'},
           u'priority': u'500'}

    def test_01_delete_flows(self):
        """Clean up from any previous test run, just delete these
            flows if they exist."""


You can find the beginnings of his API here:

Please keep in mind it's still very early going, and a work in progress. Thanks go out to Dale for letting me borrow his code for this post.


Overlay and Underlay networks

I just read an article from Greg Ferro about Integrating Overlay and Physical Networks (http://etherealmind.com/integrating-overlay-networking-and-the-physical-network/), which is a nice complement to an interesting demo I just saw at Cisco Live. The demo showed how XNC can be used to manage an overlay/underlay network, using OpenFlow to manage the traffic flows between OVS instances with GRE tunnels (overlay) and onePK to manage the physical network (underlay).

For instance, you may have traffic that needs low latency, and other traffic that needs maximum bandwidth. You could specify these two types of flows using the Topology Independent Forwarding (TIF) feature on the controller. Then you could tie them to “IPSLA” tags that indicate to the controller what characteristics the traffic wants. The controller is aware of both the overlay and underlay networks, and can coordinate the configuration between them. onePK is used to inject “application routes” into the underlay network that dictate which path the GRE tunnels take, based on the same tags used in the OpenFlow configuration.

I hope to grab some screenshots and add them to this post, but this seems like a cool use case for the controller. I also hope that some of this functionality will make its way into OpenDaylight.

Adding flows in OpenDaylight using Python and REST API

Building on my last post I’ve put together a small script that will find the shortest path between two switches, then install some flows to create a path between them. I’m leveraging some tools from the NetworkX library in Python and the Northbound REST API on the OpenDaylight controller.

I’m using Mininet as my test network, and the built-in tree topology that was used previously. When loaded into the controller the topology looks like this:
Tree3 Topo
Mininet creates two hosts off each leaf node. The end goal here is to ping between H1 and H8, which are connected to S3 (port 1) and S7 (port 2). First I want to get the shortest path between S3 and S7. To do that I load the nodes (switches) and edges (links) into a NetworkX Graph object.

I wrote a helper function to make building the URLs for the REST calls a little easier. The URL path to get the edges is: /controller/nb/v2/topology/default/
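The build_url helper (and the build_flow_url helper used later) isn't shown above; hypothetical versions consistent with how they're called might look like this. The exact flow-programmer path varies between controller versions, so treat these as sketches:

```python
# Hypothetical URL helpers -- the real ones live in the full script.
def build_url(baseUrl, service, containerName):
    # e.g. .../controller/nb/v2/topology/default
    return baseUrl + '/' + service + '/' + containerName

def build_flow_url(baseUrl, containerName, switchType, switchId, flowName):
    # e.g. .../flowprogrammer/default/node/OF/00:00:.../staticFlow/myflow
    # (path assumed; check your controller version's NB API docs)
    return (build_url(baseUrl, 'flowprogrammer', containerName)
            + '/node/' + switchType + '/' + switchId
            + '/staticFlow/' + flowName)

baseUrl = 'http://localhost:8080/controller/nb/v2'
print(build_url(baseUrl, 'topology', 'default'))
# http://localhost:8080/controller/nb/v2/topology/default
```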

This returns a list of ‘edgeProperties’. An edge has a headNodeConnector and a tailNodeConnector, which are the ports on either end of the edge (each edge is unidirectional, so there will be two entries for each link). These NodeConnectors have nodes associated with them, which I use to create a tuple to signify an edge. For example, the edge between S4 and S2 looks like this:

            "edge": {
                "tailNodeConnector": {
                    "@type": "OF",
                    "@id": "2",
                    "node": {
                        "@type": "OF",
                        "@id": "00:00:00:00:00:00:00:02"
                    }
                },
                "headNodeConnector": {
                    "@type": "OF",
                    "@id": "3",
                    "node": {
                        "@type": "OF",
                        "@id": "00:00:00:00:00:00:00:04"
                    }
                }
            }

From which I extract the node IDs (i.e. the OpenFlow IDs) to create a tuple:
(00:00:00:00:00:00:00:04, 00:00:00:00:00:00:00:02)
And then add that as an edge to my Graph object. You'll also see the ID of the NodeConnector itself as one of the attributes, which we'll use later when defining ingress and output ports for our flows. Here's the code:

# Get all the edges/links
resp, content = h.request(build_url(baseUrl, 'topology', containerName), "GET")
edgeProperties = json.loads(content)
odlEdges = edgeProperties['edgeProperties']

# Put nodes and edges into a graph
graph = nx.Graph()
for edge in odlEdges:
  e = (edge['edge']['headNodeConnector']['node']['@id'], edge['edge']['tailNodeConnector']['node']['@id'])
  graph.add_edge(*e)

I do something similar with the nodes, using the URL path: /controller/nb/v2/switch/default/nodes
That returns a list of all the switches in the network, and I add them as nodes to the graph.

# Get all the nodes/switches
resp, content = h.request(build_url(baseUrl, 'switch', containerName) + '/nodes/', "GET")
nodeProperties = json.loads(content)
odlNodes = nodeProperties['nodeProperties']
for node in odlNodes:
  graph.add_node(node['node']['@id'])

Now I run Dijkstra's algorithm via the shortest_path method. This returns a list of nodes along the shortest path.

shortest_path = nx.shortest_path(graph, "00:00:00:00:00:00:00:03", "00:00:00:00:00:00:00:07")

In this case the list is:
[00:00:00:00:00:00:00:03, 00:00:00:00:00:00:00:02, 00:00:00:00:00:00:00:01, 00:00:00:00:00:00:00:05, 00:00:00:00:00:00:00:07]
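On an unweighted graph like this one, nx.shortest_path is effectively a breadth-first search. For intuition, here's a pure-Python sketch of the same computation on the tree above (using short switch names instead of the OpenFlow IDs for readability):

```python
from collections import deque

def bfs_shortest_path(edges, src, dst):
    # Build an adjacency map from undirected edge tuples
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    # Standard BFS, remembering each node's predecessor
    prev = {src: None}
    q = deque([src])
    while q:
        n = q.popleft()
        if n == dst:
            path = []
            while n is not None:
                path.append(n)
                n = prev[n]
            return path[::-1]
        for m in adj.get(n, []):
            if m not in prev:
                prev[m] = n
                q.append(m)
    return None

# The tree-3 topology: s1 root, s2/s5 spine, s3/s4/s6/s7 leaves
edges = [("s3", "s2"), ("s2", "s1"), ("s1", "s5"), ("s5", "s7"),
         ("s2", "s4"), ("s5", "s6")]
print(bfs_shortest_path(edges, "s3", "s7"))  # ['s3', 's2', 's1', 's5', 's7']
```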

Now I can build a series of flow entries for each switch along the path, not counting the ones directly connected to the hosts. To do this I created a couple of functions: find_edge looks for an edge with a particular head node and tail node, and push_path takes a path and the edges from the API call and pushes the appropriate flows to the switches. To create a flow on a switch I use an HTTP POST and send over a JSON object that describes the flow. An example JSON flow entry object:

{"installInHw":"false","name":"test2","node":{"@id":"00:00:00:00:00:00:00:07","@type":"OF"}, "ingressPort":"1","priority":"500","etherType":"0x800","nwSrc":"","nwDst":"", "actions":"OUTPUT=2"}

And the code:

def find_edge(edges, headNode, tailNode):
  for edge in edges:
    if edge['edge']['headNodeConnector']['node']['@id'] == headNode and edge['edge']['tailNodeConnector']['node']['@id'] == tailNode:
      return edge
  return None

def push_path(path, odlEdges, srcIP, dstIP, baseUrl):
  for i, node in enumerate(path[1:-1]):
    flowName = "fromIP" + srcIP[-1:] + "Po" + str(i)
    ingressEdge = find_edge(odlEdges, path[i], node)
    egressEdge = find_edge(odlEdges, node, path[i+2])
    newFlow = build_flow_entry(flowName, ingressEdge, egressEdge, node, srcIP, dstIP)
    switchType = newFlow['node']['@type']
    postUrl = build_flow_url(baseUrl, 'default', switchType, node, flowName)
    # post the flow to the controller
    resp, content = post_dict(h, postUrl, newFlow)

def build_flow_entry(flowName, ingressEdge, egressEdge, node, srcIP, dstIP):
  # Since I don't specify the EtherType, it looks like the IP field is ignored
  # Alternatively I could add a second flow with 0x806 for ARP then 0x800 for IP
  defaultPriority = "500"
  newFlow = {"installInHw":"false"}
  ingressPort = ingressEdge['edge']['tailNodeConnector']['@id']
  egressPort = egressEdge['edge']['headNodeConnector']['@id']
  switchType = egressEdge['edge']['headNodeConnector']['node']['@type']
  newFlow.update({"name":flowName, "node":{"@id":node, "@type":switchType}})
  newFlow.update({"ingressPort":ingressPort, "priority":defaultPriority})
  newFlow.update({"nwSrc":srcIP, "nwDst":dstIP})  # This can probably be ignored for this example
  newFlow.update({"actions":"OUTPUT=" + egressPort})
  return newFlow

def post_dict(h, url, d):
  resp, content = h.request(
      uri=url,
      method='POST',
      headers={'Content-Type' : 'application/json'},
      body=json.dumps(d),
  )
  return resp, content

A few things to note:

  • I didn’t specify an EtherType, so the flows will match any packet on the links. I needed to do this so that ARP messages would make it across, but in doing so the IP addresses are ignored. This works fine for my simple example, but I may revise it later to handle ARP and IP separately so that such broad flow entries are not installed. Then I could specify IP match criteria.
  • I chose not to install the flows to the switches immediately so that I can take a look at them before actually putting them to work ("installInHw": "false").
  • I'm not yet checking for errors on the response, but you should get an HTTP 201 for successful flow entries.
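The ARP/IP split mentioned in the first bullet might look something like this sketch (build_arp_and_ip_flows is a hypothetical helper; the field names follow the flow JSON shown earlier):

```python
def build_arp_and_ip_flows(name, node, ingressPort, egressPort, srcIP, dstIP):
    # Hypothetical helper: one flow for ARP (where IP matching isn't
    # possible), and one for IPv4 with a src/dst match.
    base = {"installInHw": "false", "node": node,
            "ingressPort": ingressPort, "priority": "500",
            "actions": "OUTPUT=" + egressPort}
    arpFlow = dict(base, name=name + "-arp", etherType="0x806")
    ipFlow = dict(base, name=name + "-ip", etherType="0x800",
                  nwSrc=srcIP, nwDst=dstIP)
    return [arpFlow, ipFlow]

flows = build_arp_and_ip_flows(
    "h1h8", {"@id": "00:00:00:00:00:00:00:02", "@type": "OF"},
    "1", "2", "10.0.0.1", "10.0.0.8")
print([f["etherType"] for f in flows])  # ['0x806', '0x800']
```

Each hop would then get two narrower flow entries posted instead of one catch-all entry.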

After pushing these flows, I reverse the path, and push the reverse path onto the switches:

shortest_path.reverse()
push_path(shortest_path, odlEdges, dstIP, srcIP, baseUrl)

Finally I add the entries for the leaf nodes connected to the hosts. There is probably a way to do this with host tracker to make this dynamic, but I just hardcoded it for now:

node3FlowFromHost = {"installInHw":"false","name":"node3from","node":{"@id":"00:00:00:00:00:00:00:03","@type":"OF"},"ingressPort":"1","priority":"500","nwSrc":"","actions":"OUTPUT=3"}
node7FlowFromHost = {"installInHw":"false","name":"node7from","node":{"@id":"00:00:00:00:00:00:00:07","@type":"OF"},"ingressPort":"2","priority":"500","nwSrc":"","actions":"OUTPUT=3"}
node3FlowToHost = {"installInHw":"false","name":"node3to","node":{"@id":"00:00:00:00:00:00:00:03","@type":"OF"},"ingressPort":"3","priority":"500","nwDst":"","actions":"OUTPUT=1"}
node7FlowToHost = {"installInHw":"false","name":"node7to","node":{"@id":"00:00:00:00:00:00:00:07","@type":"OF"},"ingressPort":"3","priority":"500","nwDst":"","actions":"OUTPUT=2"}
postUrl = build_flow_url(baseUrl, 'default', "OF", "00:00:00:00:00:00:00:03", "node3from")
resp, content = post_dict(h, postUrl, node3FlowFromHost)
postUrl = build_flow_url(baseUrl, 'default', "OF", "00:00:00:00:00:00:00:07", "node7from")
resp, content = post_dict(h, postUrl, node7FlowFromHost)
postUrl = build_flow_url(baseUrl, 'default', "OF", "00:00:00:00:00:00:00:03", "node3to")
resp, content = post_dict(h, postUrl, node3FlowToHost)
postUrl = build_flow_url(baseUrl, 'default', "OF", "00:00:00:00:00:00:00:07", "node7to")
resp, content = post_dict(h, postUrl, node7FlowToHost)

Ok, after all that, I can run my script and then go to the controller to see all the flows added.
Now I install them into the switches by clicking on a flow and then ‘Install Flow’.
After the flows have all been installed, I can then try out my pings in mininet:
Mininet Ping 1
And just for good measure I can go over to the Troubleshooting tab on the controller to see the stats of these flows, and see how packets are matching on my flow entries.
You can find the complete script on Github: https://github.com/fredhsu/odl-scripts/tree/master/python/addflow

Making a topology graph with OpenDaylight REST API, Python, and Javascript

The controller as it currently stands already has a nice topology view of the network, but I thought it would be a good exercise to try to make a web page showing the topology using the API. To do this I've written a short Python script leveraging the NetworkX library (which will later allow me to do things like use Dijkstra's algorithm to find the shortest path between two nodes), D3.js for visualization, and the REST API on the controller.

First I grabbed all the topology data from the controller. This was just a few simple API calls, followed by some stuff to parse through the JSON data:

baseUrl = 'http://localhost:8080/controller/nb/v2/'
containerName = 'default/'

h = httplib2.Http(".cache")
h.add_credentials('admin', 'admin')

# Get all the edges/links
resp, content = h.request(baseUrl + 'topology/' + containerName, "GET")
edgeProperties = json.loads(content)
odlEdges = edgeProperties['edgeProperties']

# Get all the nodes/switches
resp, content = h.request(baseUrl + 'switchmanager/' + containerName + 'nodes/', "GET")
nodeProperties = json.loads(content)
odlNodes = nodeProperties['nodeProperties']

You'll see we grabbed a list of all the edges (links) and all the nodes (switches). The edges are given in an array called edgeProperties; I take that array and assign it to odlEdges. Here is an example of an edge object:

{
  "edge": {
    "tailNodeConnector": {
      "node": {
        "@id": "00:00:00:00:00:00:00:06",
        "@type": "OF"
      },
      "@id": "3",
      "@type": "OF"
    },
    "headNodeConnector": {
      "node": {
        "@id": "00:00:00:00:00:00:00:05",
        "@type": "OF"
      },
      "@id": "1",
      "@type": "OF"
    }
  },
  "properties": {
    "timeStamp": {
      "timestamp": "1370292151090",
      "timestampName": "creation"
    },
    "state": {
      "stateValue": "1"
    },
    "config": {
      "configValue": "1"
    },
    "name": {
      "nameValue": "s5-eth1"
    },
    "bandwidth": {
      "bandwidthValue": "10000000000"
    }
  }
}
You’ll see that an edge has a head node, tail node, and the associated ports. This implies that there are two ‘edges’ for every link, one in each direction.
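One consequence: if you only want one entry per physical link, you can collapse the two directed edges, e.g. with frozensets of the two node IDs (a quick sketch, using the pair from the example above):

```python
# The same link as seen from both directions in edgeProperties
raw_edges = [
    ("00:00:00:00:00:00:00:05", "00:00:00:00:00:00:00:06"),
    ("00:00:00:00:00:00:00:06", "00:00:00:00:00:00:00:05"),
]
# frozenset is order-insensitive, so both directions collapse to one entry
undirected = {frozenset(e) for e in raw_edges}
print(len(undirected))  # 1
```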

I also get an array of Node objects called nodeProperties. The last line of the code above takes that array and assigns it to odlNodes; here is an example node:

{
  "node": {
    "@id": "00:00:00:00:00:00:00:07",
    "@type": "OF"
  },
  "properties": {
    "macAddress": {
      "nodeMacAddress": "AAAAAAAH",
      "controllerMacAddress": "aKhtCMic"
    },
    "tables": {
      "tablesValue": "-1"
    },
    "timeStamp": {
      "timestamp": "1370292150118",
      "timestampName": "connectedSince"
    },
    "capabilities": {
      "capabilitiesValue": "199"
    },
    "actions": {
      "actionsValue": "4095"
    },
    "property": null,
    "buffers": {
      "buffersValue": "256"
    }
  }
}

Next I take all those nodes/edges, and send them to NetworkX in a simpler format with just the info that I need:

# Put nodes and edges into a graph
graph = nx.Graph()
for node in odlNodes:
  graph.add_node(node['node']['@id'])
for edge in odlEdges:
  e = (edge['edge']['headNodeConnector']['node']['@id'], edge['edge']['tailNodeConnector']['node']['@id'])
  graph.add_edge(*e)

I'm not really making much use of NetworkX in this example, but one thing I can do is export this simplified graph to a number of different graph formats, or plot the graph. Since I wanted to try to make a web app, I chose to dump it as JSON and then send it over to D3.js for graphing.

d = json_graph.node_link_data(graph)
json.dump(d, open('topo.json','w'))
print('Wrote node-link JSON data')

Here is what the output file ends up looking like:

{"directed": false, "graph": [], "nodes": [{"id": "00:00:00:00:00:00:00:01"}, {"id": "00:00:00:00:00:00:00:03"}, {"id": "00:00:00:00:00:00:00:02"}, {"id": "00:00:00:00:00:00:00:05"}, {"id": "00:00:00:00:00:00:00:04"}, {"id": "00:00:00:00:00:00:00:07"}, {"id": "00:00:00:00:00:00:00:06"}], "links": [{"source": 0, "target": 2}, {"source": 0, "target": 3}, {"source": 1, "target": 2}, {"source": 2, "target": 4}, {"source": 3, "target": 5}, {"source": 3, "target": 6}], "multigraph": false}
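As a quick sanity check, that dump can be parsed right back: the 7 switches of the tree become 7 node entries, connected by 6 undirected links.

```python
import json

# The node-link JSON written by the script above
topo = '''{"directed": false, "graph": [], "nodes": [{"id": "00:00:00:00:00:00:00:01"}, {"id": "00:00:00:00:00:00:00:03"}, {"id": "00:00:00:00:00:00:00:02"}, {"id": "00:00:00:00:00:00:00:05"}, {"id": "00:00:00:00:00:00:00:04"}, {"id": "00:00:00:00:00:00:00:07"}, {"id": "00:00:00:00:00:00:00:06"}], "links": [{"source": 0, "target": 2}, {"source": 0, "target": 3}, {"source": 1, "target": 2}, {"source": 2, "target": 4}, {"source": 3, "target": 5}, {"source": 3, "target": 6}], "multigraph": false}'''

data = json.loads(topo)
print("%d nodes, %d links" % (len(data["nodes"]), len(data["links"])))
# 7 nodes, 6 links
```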

I decided to leave the graph as undirected, to show just a single link between the nodes. Now I can write some JavaScript to graph everything with D3.js. I'm using a force-directed layout to arrange everything:

Now we can wrap it up in HTML and see our graph.
Screen Shot 2013-06-03 at 1.42.57 PM
This ended up being a pretty easy one, but it did help familiarize me with how the topology API works. You can find all the code here:

OpenDaylight with Scala

I've always been interested in learning functional programming, and wanted to try a few of the languages, such as Scala. Since Scala is a JVM-based language, it should interoperate with OSGi pretty well. I figured if I could build a bundle jar with the appropriate manifest, then it would be all good. So here's my attempt at making Scala work with OSGi and OpenDaylight.

The first piece is SBT (Scala Build Tool), and the appropriate plugins.  To use SBT, first we create the directory structure:

| mystats-scala
| - project
| - src
    | - main
        | - scala

To handle some of the OSGi stuff I used the SBT BND plugin, which is added to the project/plugins.sbt file:

addSbtPlugin("com.typesafe.sbt" % "sbt-osgi" % "0.4.0")

You can find out more about this plugin here: http://wiki.osgi.org/wiki/SbtScalaBndToolchain

Next I created a build.sbt in the root of my project directory. In this file I declare my dependencies (OSGi core, Apache Felix, and parts of ODL), and tell it to look at my local Maven repository to resolve them (since I've already built the ODL code with mvn install):

name := "mystats"

version := "1.0.0"

libraryDependencies ++= Seq(
	"org.osgi" % "org.osgi.core" % "4.3.0" % "provided",
	"org.apache.felix" % "org.apache.felix.framework" % "4.0.3" % "runtime",
	"org.opendaylight.controller" % "sal" % "0.4.0-SNAPSHOT",
	"org.opendaylight.controller" % "statisticsmanager" % "0.4.0-SNAPSHOT"
)

resolvers += "Local Maven Repository" at "file:///"+Path.userHome+"/.m2/repository"

Thanks to the plugin above, I can now add in OSGi information:


OsgiKeys.bundleActivator := Option("com.example.mystats.Activator")

OsgiKeys.importPackage := Seq("*")

OsgiKeys.exportPackage := Seq("com.example.mystats")

OsgiKeys.additionalHeaders := Map(
    "Service-Component" -> "*",
    "Conditional-Package" -> "scala.*"
)

Now I can write my OSGi application in Scala. In my src/main/scala directory I create my Activator.scala file:

You'll see that I have to do some Java conversions to create a Scala Set out of the Java Set, but overall it's a pretty clean translation.
To get my jar file I use sbt:

$ sbt osgi-bundle

This creates my jar in the target/scala-2.9.2 directory. I can now install my file into the OSGi framework:

osgi> install file:///home/fhsu/code/scala/hello/target/scala-2.9.2/mystats_2.9.2-1.0.0.jar

Then I can start the bundle and see the results:

osgi> start 150
Start OSGi Bundle...
Start class
interface org.opendaylight.controller.switchmanager.ISwitchManager
Node: OF|00:00:00:00:00:00:00:07
Node: OF|00:00:00:00:00:00:00:06
Node: OF|00:00:00:00:00:00:00:05
Node: OF|00:00:00:00:00:00:00:04
Node: OF|00:00:00:00:00:00:00:03
Node: OF|00:00:00:00:00:00:00:02
Node: OF|00:00:00:00:00:00:00:01

I think this shows yet another cool way OSGi allows for modularity. Now we can even mix and match different languages! You can find the project code here: