More Go concurrency using pipelines with eAPI

As a follow-on to my previous post on using Go channels for concurrency, I thought I would try the pipeline pattern as well. The idea is to create a series of goroutines that you string together through channels. This lets you mix and match (compose) small functions to build the final result you want, much like the '|' operator in Unix. For this example I'm going to take a few different show commands I want to run, create pipelined functions out of them, then string them together to pull down the final result I want.
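Before getting to the eAPI version, here's the general shape of the pattern as a minimal, self-contained sketch (the gen/double stage names are just for illustration):

```go
package main

import "fmt"

// gen is the producer: it emits each value on a channel.
func gen(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		for _, n := range nums {
			out <- n
		}
		close(out)
	}()
	return out
}

// double is an intermediate stage: it reads from one channel
// and writes transformed values to another.
func double(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		for n := range in {
			out <- n * 2
		}
		close(out)
	}()
	return out
}

func main() {
	// Stages compose like a Unix pipeline: gen | double | double.
	for n := range double(double(gen(1, 2, 3))) {
		fmt.Println(n) // 4, 8, 12
	}
}
```

Each stage returns its outbound channel, so stages nest (or chain through variables) exactly like commands joined by pipes.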

For this example I will grab the show version and show running-config output from a series of Arista switches. I've defined a JSON file to store the switch names and connection information. Here is a short function to read in that file and parse the JSON data:

func readSwitches(filename string) []EosNode {
	var switches []EosNode
	file, err := os.Open(filename)
	if err != nil {
		log.Fatal(err) // bail out if the file is missing
	}
	decoder := json.NewDecoder(file)
	err = decoder.Decode(&switches)
	if err != nil {
		log.Fatal(err) // bail out on malformed JSON
	}
	return switches
}

To store all the information I created a struct with fields for the relevant data (there are some extra fields here for future use):

type EosNode struct {
	Hostname      string
	MgmtIp        string
	Username      string
	Password      string
	Ssl           bool
	Reachable     bool
	ConfigCorrect bool
	Uptime        float64
	Version       string
	Config        string
	IntfConnected []string
	IpIntf        []string
	Vlans         []string
}

Now I start writing my functions. There are three types of functions that we need. First, a producer that starts the whole thing off by feeding each switch (in this case an EosNode) into a channel. Then intermediate functions receive from that channel, act on each value, and send the results on a new channel of EosNodes. Finally, the consumer reads from the last channel and produces the final result.

The producer (or generator) takes a list of EosNodes, kicks off a goroutine that feeds each switch into the out channel, and returns that channel from the function:

func genSwitches(nodes []EosNode) <-chan EosNode {
	out := make(chan EosNode)
	go func() {
		for _, node := range nodes {
			out <- node
		}
		close(out) // lets downstream range loops finish
	}()
	return out
}

Now the intermediate functions: each one receives EosNodes from its inbound channel, runs an eAPI call to fill in more data, then sends the updated EosNode on a new outbound channel:

func getConfigs(in <-chan EosNode) <-chan EosNode {
	out := make(chan EosNode)
	go func() {
		for n := range in {
			cmds := []string{"enable", "show running-config"}
			url := buildUrl(n)
			response := eapi.Call(url, cmds, "text")
			config := response.Result[1]["output"].(string)
			n.Config = config
			out <- n
		}
		close(out)
	}()
	return out
}

func getVersion(in <-chan EosNode) <-chan EosNode {
	out := make(chan EosNode)
	go func() {
		for n := range in {
			cmds := []string{"show version"}
			url := buildUrl(n)
			response := eapi.Call(url, cmds, "json")
			version := response.Result[0]["version"].(string)
			n.Version = version
			out <- n
		}
		close(out)
	}()
	return out
}

Note: I had a small helper function in there called buildUrl to create the eAPI URL.

Finally the consumer (or sink) in this case is just a for loop in main() that grabs the results from the channel:

	for i := 0; i < len(switches); i++ {
		node := <-out
		fmt.Printf("%+v\n", node)
	}

This comes after I call my functions, so the whole main() function looks like this:

func main() {
	swFilePtr := flag.String("swfile", "switches.json", "A JSON file with switches to fetch")
	flag.Parse() // command-line flag parsing
	switches := readSwitches(*swFilePtr)
	fmt.Println("############# Using Pipelines ###################")
	c1 := genSwitches(switches)
	c2 := getConfigs(c1)
	out := getVersion(c2)
	for i := 0; i < len(switches); i++ {
		node := <-out
		fmt.Printf("%+v\n", node)
	}
}
In the above I start with the producer that creates a channel c1, then getConfigs takes that and produces a new channel c2 after processing. c2 is then fed into getVersion to produce yet another channel. Finally we consume it all. If I were to add more functions, I could keep chaining those channels together to grab all kinds of data from the switches. Here’s the complete program:
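Since the full program depends on the eapi package, here is a condensed, runnable sketch of the same pipeline with the eAPI call stubbed out (the stubbed getConfigs just fakes a config string, and only a few struct fields are kept):

```go
package main

import "fmt"

// EosNode here keeps just the fields this sketch uses; the real
// struct has more (see above).
type EosNode struct {
	Hostname string
	Config   string
}

func genSwitches(nodes []EosNode) <-chan EosNode {
	out := make(chan EosNode)
	go func() {
		for _, n := range nodes {
			out <- n
		}
		close(out)
	}()
	return out
}

// getConfigs stands in for the eAPI stage; the assignment below is a
// stub for where the real code calls eapi.Call.
func getConfigs(in <-chan EosNode) <-chan EosNode {
	out := make(chan EosNode)
	go func() {
		for n := range in {
			n.Config = "hostname " + n.Hostname // stub for eapi.Call
			out <- n
		}
		close(out)
	}()
	return out
}

func main() {
	switches := []EosNode{{Hostname: "sw1"}, {Hostname: "sw2"}}
	out := getConfigs(genSwitches(switches))
	for n := range out {
		fmt.Println(n.Hostname, "->", n.Config)
	}
}
```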

Network scripting using concurrency with Go, Goroutines, and eAPI

Some cool things about Go are that concurrency is built in and that it compiles fast. Although it's possible to create concurrent programs in Python, it's much easier to do in Go. For network scripting this might seem unnecessary, but if you're running scripts against a lot of switches, it can make a big difference. Here's a simple example I created to illustrate:


First I created a simple eAPI Python script to grab a show running-config from a single switch:

I ran it with the Linux command time to see how long it takes to fetch the config: 0m0.852s

Now I added three more switches, and ran this script:

And the time for this one: 0m3.188s
Roughly 4x the time for one switch, which is what you would expect.

Go without goroutines

Now let's try something similar in Go, first running a single switch:

First I'm going to run it with 'go run', which gives this time: 0m1.191s
A little slower than Python, but that includes compiling and running the program! We can speed it up a little by first compiling with 'go build', then just running the executable: 0m0.791s
Now it's slightly faster than Python.

Ok now let’s do four switches:

Timed using go run shrun4.go: 0m3.601s
And when precompiled: 0m3.073s
As expected, we see a 4x increase in time for 4x the switches.

Go with multiple Goroutines

Now for the fun part, adding in concurrency. To do this we’re going to use goroutines. A goroutine is a function that is capable of running concurrently with other goroutines and is very lightweight (lighter than a thread). First I’m going to take the part of the code that gets the config and put it in a function which will become our goroutine:

func configFetcher(url string, cmds []string, format string, c chan eapi.JsonRpcResponse) {
	response := eapi.Call(url, cmds, format)
	c <- response
}

You'll notice the variable c, which is a channel. I'm not going to go into all the details here, but channels allow different goroutines to communicate, and here we've defined a channel that passes a JsonRpcResponse between goroutines. We send data using the <- operator, so the line c <- response takes the response and sends it over the channel.
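Here's that idea in a runnable, self-contained form, with a stub standing in for the eAPI call (the fetch function and its strings are just placeholders):

```go
package main

import "fmt"

// fetch stands in for eapi.Call: it "fetches" a result and sends it
// back over the channel instead of returning it.
func fetch(name string, c chan string) {
	c <- "config for " + name
}

func main() {
	c := make(chan string)
	go fetch("switch1", c) // runs concurrently with main
	msg := <-c             // blocks until fetch sends its result
	fmt.Println(msg)       // config for switch1
}
```

The receive `<-c` is what synchronizes the two goroutines: main waits exactly until the worker has something to hand back.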

Now in our main function we create four channels to get the data back from our goroutines. There is probably a better way to do this, but this was a simple way for me to conceptualize it:

c1 := make(chan eapi.JsonRpcResponse)
c2 := make(chan eapi.JsonRpcResponse)
c3 := make(chan eapi.JsonRpcResponse)
c4 := make(chan eapi.JsonRpcResponse)

Now we create our goroutines by simply adding go to the front of the function call, and pass in our channels:

go configFetcher(url1, cmds2, "text", c1)
go configFetcher(url2, cmds2, "text", c2)
go configFetcher(url3, cmds2, "text", c3)
go configFetcher(url4, cmds2, "text", c4)

And now I grab the response from the channels:

msg1 := <- c1
msg2 := <- c2
msg3 := <- c3
msg4 := <- c4

Here’s the whole thing in one piece:

Now let’s run it and see what happens. With go run I get: 0m1.330s
Hitting four switches is now about the same as one! And if we precompile: 0m0.865s
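As an aside on the "probably a better way" note above: since all four goroutines send the same type, they can share a single channel, and we just receive once per goroutine. A sketch with the eAPI call stubbed out:

```go
package main

import "fmt"

// configFetcher here is a stub: the real version calls eapi.Call and
// sends the JsonRpcResponse on the shared channel.
func configFetcher(url string, c chan string) {
	c <- "response from " + url
}

func main() {
	urls := []string{"url1", "url2", "url3", "url4"}
	c := make(chan string)
	for _, u := range urls {
		go configFetcher(u, c) // all fetchers share one channel
	}
	// One receive per goroutine; order depends on which finishes first.
	for range urls {
		fmt.Println(<-c)
	}
}
```

This scales to any number of switches without declaring c1 through cN by hand.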


# switches | Python | go run | Go compiled | goroutines (go run) | goroutines (compiled)
1          | 0.852s | 1.191s | 0.791s     | n/a                 | n/a
4          | 3.188s | 3.601s | 3.073s     | 1.330s              | 0.865s

Having concurrency built in makes it easy to create programs that take advantage of multi-core processors and get some nice performance gains.
While a network script might not seem to need the added complexity, we see a significant performance boost with just four switches. Imagine tens or hundreds of switches: it could cut a script's runtime from minutes to seconds, and thanks to Go, it's not that hard to do.

eAPI Python script to look at ARP entries per VRF

I needed to see all the different ARP entries in each VRF, so I wrote up this little script to do just that. The 'show vrf' command in eAPI has not yet been converted to JSON, so I had to do some text parsing to get the VRF names, then use those names to grab the ARP entries. On line 4 of the script you'll see that I use the 'text' option for the output format. That lets me run a command that hasn't been converted yet and get the raw text output inside the JSON reply:

response = switch.runCmds( 1, ["show vrf"], "text" )

The output looks like this:

"output": "   Vrf         RD            Protocols       State         Interfaces \n----------- ------------- --------------- ---------------- ---------- \n   test        100:100       ipv4            no routing               \n   test2       101:101       ipv4            no routing               \n   test3       102:102       ipv4            no routing               \n\n"

Or in a more familiar format:

   Vrf         RD            Protocols       State         Interfaces
----------- ------------- --------------- ---------------- ----------
   test        100:100       ipv4            no routing
   test2       101:101       ipv4            no routing
   test3       102:102       ipv4            no routing

Then I take the output and use splitlines() to take each line (separated by newline) and insert them into a list:

lines = response[0]['output'].splitlines()

Now I iterate through each entry of the 'show vrf' output and issue a 'show ip arp vrf' with the VRF name. I use the range() function, starting at the 3rd line (index 2, since the first two lines are just headers), and go through to the end of the list. Then I use the split() method to split each line on whitespace, taking the first entry, which corresponds to the VRF name. Finally, I can use that VRF name in my command.

for i in range(2, len(lines) - 1):
  vrfname = lines[i].split()[0]
  command = "show ip arp vrf " + vrfname
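For comparison with the Go posts above, the same header-skipping and whitespace-splitting approach can be sketched in Go (vrfNames is just an illustrative name of mine):

```go
package main

import (
	"fmt"
	"strings"
)

// vrfNames pulls the first column out of the 'show vrf' text output,
// skipping the two header lines just like range(2, ...) in Python.
func vrfNames(output string) []string {
	lines := strings.Split(output, "\n")
	var names []string
	for _, line := range lines[2:] {
		fields := strings.Fields(line)
		if len(fields) == 0 {
			continue // skip blank trailing lines
		}
		names = append(names, fields[0])
	}
	return names
}

func main() {
	output := "   Vrf         RD            Protocols       State         Interfaces \n----------- ------------- --------------- ---------------- ---------- \n   test        100:100       ipv4            no routing               \n   test2       101:101       ipv4            no routing               \n   test3       102:102       ipv4            no routing               \n\n"
	for _, name := range vrfNames(output) {
		fmt.Println("show ip arp vrf " + name)
	}
}
```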

Here’s the script in its entirety:

eAPI script to try different IP addresses

I’ve been trying to use code to solve more problems around the lab lately, and thought I’d start posting some of the little scripts I write. Today I had plugged a device (device A) into a switch and didn’t know what the device had set for its gateway (I did have the IP of the device itself, and it wasn’t .1 or .254). I didn’t have access to the configuration, so I thought I’d write a script to go through possible IP addresses and see if one of them would take until the owner got back to me. I had another machine trying to ping the IP address from a different subnet, so if the right gateway address was configured on the Ethernet port, I should start getting pings:

Device A — Switch — Test pinging machine

Now I started my constant ping from the test machine, then I created and ran this Python script to find the right address.

Arista eAPI from Microsoft PowerShell

I haven't really played around with Windows in a while, but a few people have shown me some cool things in PowerShell, so I thought I'd give it a try with eAPI. Here's a really simple script that fetches information from an Arista switch and puts it into a PowerShell object so it can be used however you'd like. Now I just need to find an excuse to buy a Surface Pro 2 🙂

I start off by setting up some variables for the username, password, and switch IP. Variables in PowerShell start with a $ sign.

$username = "admin"
$password = "admin"
$switchIp = ""

I’m able to insert variables directly into the string for the URL.

$eApiUrl = "https://$switchIp/command-api"

Now I create an array to hold the commands I want to send, and put that inside a hash table. Arrays are created with @() and hash tables (dictionaries in Python, maps in Go) with @{}:

$cmds = @('show version')
$params = @{version= 1;cmds= $cmds; format="json"}

Now I create a new PowerShell object with all the required fields. PowerShell has a cool pipe operator (|) like Unix and Elixir. It allows you to string together a bunch of steps; in this case we end by piping the object to ConvertTo-Json to turn it into a JSON string. Then I have to convert that string into ASCII bytes to make it web friendly:

$command = (New-Object PSObject | Add-Member -PassThru NoteProperty jsonrpc '2.0' |
Add-Member -PassThru NoteProperty method 'runCmds' |
Add-Member -PassThru NoteProperty params $params |
Add-Member -PassThru NoteProperty id '1') | ConvertTo-Json
$bytes = [System.Text.Encoding]::ASCII.GetBytes($command)

After we have our command ready to go, we create the web connection and POST the JSON-RPC call. I also tell the system to ignore the web certificate since I haven’t installed the cert for the SSL connection.

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
$web = [System.Net.WebRequest]::Create($eApiUrl)
$web.Method = "POST"
$web.ContentType = "application/json"
$web.Credentials = New-Object System.Net.NetworkCredential -ArgumentList $username, $password
$stream = $web.GetRequestStream()
$stream.Write($bytes, 0, $bytes.Length)

Finally we take the response and put it back into a PowerShell object using ConvertFrom-Json:

$reader = New-Object System.IO.StreamReader -ArgumentList $web.GetResponse().GetResponseStream()
$response = $reader.ReadToEnd() | ConvertFrom-Json

Once we’ve got an object we can pull out pieces of the response:

Write-Host "Model is: $($response.result.modelName)"

This is what the output looks like:
[Screenshot: script output showing the switch model name]

Here’s the full script from start to finish:

Longest Prefix Match in Go part 2

In case you're interested, here is part 1.

Now that I can convert IP subnets to binary, it's time to do some prefix matching.

Naive Implementation

As a benchmark to use later, I started with a really simple, naive implementation of LPM that just searches through a list of prefixes (represented as binary strings) to find the longest match:

func routeMatch(r, p string) bool {
    return strings.HasPrefix(r, p)
}

func NaiveFind(r string, routes []string) string {
    best := ""
    for _, elem := range routes {
        if routeMatch(r, elem) && len(elem) > len(best) {
            best = elem
        }
    }
    return best
}

The routeMatch function just wraps strings.HasPrefix to determine whether the route we're looking up matches the prefix we're currently looking at. In NaiveFind I go through every route in the routes slice, checking whether the route matches the prefix. Since we're doing LPM, I also check whether the matching prefix is longer than the best match so far; if so, I make the current prefix the best. At the end I return the best prefix. This simple lookup grows linearly with the size of the prefix list, since we have to go through every prefix to do a lookup. I could probably do some form of sorting to help, but due to the varying prefix lengths it could get messy. Next I'll implement the trie and show how much better it works for this lookup.
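A quick usage sketch, with the routes already converted to binary strings as in part 1 (the example prefixes are mine):

```go
package main

import (
	"fmt"
	"strings"
)

func routeMatch(r, p string) bool {
	return strings.HasPrefix(r, p)
}

func NaiveFind(r string, routes []string) string {
	best := ""
	for _, elem := range routes {
		if routeMatch(r, elem) && len(elem) > len(best) {
			best = elem
		}
	}
	return best
}

func main() {
	// An 8-bit and a 16-bit prefix over the same leading bits; for an
	// address that matches both, the 16-bit prefix should win.
	routes := []string{"00001010", "0000101000000001"}
	addr := "00001010000000010000000100000001"
	fmt.Println(NaiveFind(addr, routes)) // 0000101000000001
}
```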

Longest Prefix Match algorithm in Go part 1

One of the lookups a router/switch has to do is a longest prefix match (LPM), which is normally done in hardware with some sophisticated algorithms. But since I'm no ASIC designer, I thought I'd play around with doing it in software. If everything had fixed prefix sizes (i.e. no VLSM), things would be easier: we could do a direct lookup for the prefix in question and get an answer. Since we have varying masks, we can't just do a single lookup; we have to find the best match based on which entry matches the most bits.

As a programming exercise for learning Go, I decided to try out a couple different methods of doing LPM:
1) Naive approach: search through every route and find the best LPM. Worst case is O(n), where n is the number of routing-table entries. This is fine for a very small routing table, but grows linearly with its size; essentially, if we add 10x more routes, a lookup takes roughly 10x longer. It also requires O(n) storage for all the routes.

2) Use a trie data structure: using a trie for LPM maxes out at 32 steps for an IPv4 address, and takes fewer steps for shorter prefixes. This builds a trie that maps each bit to a branch, so the worst case is following all 32 bits down to a /32 host route. No matter how large our routing table gets, we will never do more than 32 lookups. In addition, there are some space savings, since overlapping prefixes share common branches. This could be compressed further using a radix tree.

Given what I've outlined above, I expect #1 to outperform #2 for small tables (< 30 routes). But as the table size grows, #2 should stay the same while #1 takes longer and longer. Current Internet BGP tables are at 500,000+ routes, so we definitely want efficiency when doing these lookups.
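To make the comparison concrete, here's one possible shape of such a bit-per-branch trie (a sketch of mine, not the implementation from a later part):

```go
package main

import "fmt"

// node is one bit of the trie: children[0] follows a 0 bit,
// children[1] follows a 1 bit.
type node struct {
	children [2]*node
	prefix   string // non-empty if a route terminates here
}

// Insert walks the bits of a binary prefix string, creating branches
// as needed, and marks the final node as a route.
func (n *node) Insert(p string) {
	cur := n
	for _, b := range p {
		i := b - '0'
		if cur.children[i] == nil {
			cur.children[i] = &node{}
		}
		cur = cur.children[i]
	}
	cur.prefix = p
}

// Lookup follows the bits of an address, remembering the last route
// seen along the way: that is the longest match.
func (n *node) Lookup(addr string) string {
	best := ""
	cur := n
	for _, b := range addr {
		i := b - '0'
		if cur.children[i] == nil {
			break
		}
		cur = cur.children[i]
		if cur.prefix != "" {
			best = cur.prefix
		}
	}
	return best
}

func main() {
	root := &node{}
	root.Insert("00001010")         // an 8-bit prefix in binary
	root.Insert("0000101000000001") // a 16-bit prefix sharing the same first 8 bits
	fmt.Println(root.Lookup("00001010000000010000000100000001")) // 0000101000000001
}
```

Note the two prefixes share the first eight branches, which is where the space savings for overlapping routes comes from.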

For this project I went with a Test Driven Development (TDD) methodology where I wrote the unit tests first, then created the code to make it work. Then I created some benchmark tests (an awesome feature of Go testing) to see if using the trie performed like I hoped.

Test file

Here’s the beginning of the test file:

package trie

import (
	"testing"
)

This puts the test file in the trie package and imports the libraries that I’ll be using for the tests.

Helper Functions

First I wrote a couple helper functions to convert IP addresses to a string of binary digits. There are two functions here, one for converting a single byte/octet and another to do the entire dotted decimal IP address/mask. Here are the tests for those functions:

func TestConvertOctet(t *testing.T) {
	if ConvertOctet("10") != "00001010" {
		t.Errorf("want %v", "00001010")
	}
	if ConvertOctet("0") != "00000000" {
		t.Errorf("want %v", "00000000")
	}
	if ConvertOctet("255") != "11111111" {
		t.Errorf("want %v", "11111111")
	}
	if ConvertOctet("255") == "00000000" {
		t.Errorf("want %v", "11111111")
	}
}

func TestConvert(t *testing.T) {
	if Convert("") != "00001010" {
		t.Errorf("want %v", "00001010")
	}
	if Convert("") != "0000101000000001" {
		t.Errorf("Got %v, want %v", Convert(""), "0000101000000001")
	}
	if Convert("") != "11010010011100001001" {
		t.Errorf("Got %v, want %v", Convert(""), "11010010011100001001")
	}
	if Convert("") != "11111111111111111111111111111111" {
		t.Errorf("Got wrong binary")
	}
}

You'll see that each testing function receives a pointer to a test variable that handles the state and logging of the testing. Every function also begins with Test, which is what triggers it when you run go test. Initially these tests will fail, of course; then it's up to me to make them pass. The cool thing is that as the code develops, they always get run, making sure I don't break anything later. Now let's write the code to do these conversions. First, converting a single octet:

// Converts an IP octet (0-255) to binary with 8 bits
func ConvertOctet(oct string) string {
        i, err := strconv.ParseInt(oct, 10, 64)
        if err != nil {
                return "" // not a valid number
        }
        j := strconv.FormatInt(i, 2)
        pad := strings.Repeat("0", 8-len(j))
        return strings.Join([]string{pad, j}, "")
}
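As an aside, if you want a shorter route, fmt.Sprintf's %08b verb does the zero-padded binary conversion in one step. This variant is a sketch of mine, not the version the post continues with:

```go
package main

import (
	"fmt"
	"strconv"
)

// ConvertOctet converts an IP octet (0-255) to an 8-bit binary string
// using Sprintf's zero-padded binary verb instead of manual padding.
func ConvertOctet(oct string) string {
	i, err := strconv.Atoi(oct)
	if err != nil {
		return "" // not a valid number
	}
	return fmt.Sprintf("%08b", i)
}

func main() {
	fmt.Println(ConvertOctet("10"))  // 00001010
	fmt.Println(ConvertOctet("255")) // 11111111
}
```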

There is probably a better way to do this, but I took the octet, passed in as a string, and converted it to an integer. Then I take that integer and reformat it as a string of binary digits. Finally I create some padding to put zeros in front if it's not 8 bits long, and join the padding to the binary string. Now we can use this to convert an IP address:

// Converts an x.x.x.x/nn address to binary, truncated at the mask
func Convert(addrMask string) string {
        parts := strings.Split(addrMask, "/")
        ip := strings.Split(parts[0], ".")
        var buffer bytes.Buffer
        mask, err := strconv.Atoi(parts[1])
        if err != nil {
                return "" // not a valid mask
        }
        for i := 0; i < 4; i++ {
                buffer.WriteString(ConvertOctet(ip[i]))
        }
        return buffer.String()[:mask]
}

For this function I take the IP address, which will be in the form "x.x.x.x/24", and split off the mask. Then I split the individual octets into a slice. I convert the mask (i.e. 24) into an integer to be used later. Then I go through each octet, converting it to binary with the ConvertOctet function and appending each binary value to the buffer. Finally I return the buffer converted to a string, slicing off the host part using the mask as an index into the string.

In the next part I’ll go into the naive implementation.