PANTHEON.tech @ ONS 2019

Our PANTHEON.tech all-star team – Juraj Veverka, Martin Varga, Róbert Varga & Štefan Kobza – visited the Open Networking Summit North America 2019 in the sunny city of San Jose, California.

Over the span of 3 days, keynotes and presentations were held by notable personalities from various networking companies & technological giants.

“The networking part was a welcoming mixture of old acquaintances & new faces interested in our products, our solutions and the company in general.”

“It was crucial to present our ideas, concepts & goals in short messages or a few slides. Since the approximate time of a booth visit was 2-3 minutes, I had to focus precisely and point out why we matter.” This was not an easy task, since companies like Ericsson, Intel, Huawei or IBM were also present at the conference.

Our Skydive/VPP Demo

“Before the actual conference, we were asked to come up with potential demo ideas, which would be presented at ONS. Our Skydive – VPP Integration demo was accepted, which opened the doors to wide interest in PANTHEON.tech.” As Juraj recollects, after the keynote, waves of visitors swarmed the nearest booths, including ours and our Skydive – VPP Integration demo:

Skydive is an open-source, real-time network topology and protocol analyzer. It aims to provide a comprehensive way of understanding what is happening in the network infrastructure.

We have prepared a demonstration topology containing running VPP instances. Any changes to the network topology are detected in real time.

Skydive is a tool that can quickly show what is currently happening in your network. VPP serves the purpose of fast packet processing, which is one of the main building blocks of a (virtual) network. If anybody wants a fast network, VPP is the way to go.

Connecting Skydive & VPP is almost a necessity, since diagnosing VPP issues is a lot more difficult without it. Skydive leads you towards the source of the issue, as opposed to the time-consuming study of the VPP console.

The importance of this demo stems from the need to extract data from VPP. Since the platform does not easily provide the required output by itself, the demo provides a bird's-eye view of the network via Skydive.

“The Skydive demo was a huge hit. Since it was presented in full-screen and had its own booth, we gained a lot of attention due to this unique solution.” Attendees were also interested in our Visibility Package solution and what kind of support PANTHEON.tech could provide for their future network endeavors.

Improvements & ONS 2020

“In between networking, presenting and attending keynotes, we managed to do some short sightseeing and enjoy the city of San Jose a little bit. But of course, work could not wait, and we were excited to come back to the conference atmosphere each day.”

But how was ONS 2019 in comparison to last year's conference? “In comparison to last year, the Open Networking Summit managed to expand & increase in quality. We are glad that we made many new acquaintances, whom we will hopefully meet again next year!”

We would like to thank Linux Foundation Networking for this opportunity. See you next year!

[PyConSK19] Automated visualization and presentation of test results

PANTHEON.tech’s developer Tibor Frank recently held a presentation at PyConSK19 in Bratislava. Here is his full-length presentation, with some notes on top of it.

Motivation

Patterns, trends, and correlations that might remain undetected in text-based data can be exposed and recognized easier with data visualization.

Why do we need to visualize data? Because people like pictures and hate piles of numbers. When we want or need to provide data to our audience, the best way to communicate it is to display it. But we must display it in a form and structure our intended audience can consume, so that they can process the information in a fraction of a second and in the correct way.

A real-world example

When I was building my house, a sales representative approached me, claiming he had the best thermal insulation for houses on the market.

He told me that it was 10% better than the second one and showed me this picture. Not a graph – a picture, with just one blue rectangle and two lines. From this point on, I wasn’t interested in his product anymore. But I was curious about the price of this magic material. He told me that it was only 30% more expensive than the second one. Then I asked him to leave.

Of course, I was curious, so I did some research later in the evening. I found the information about both materials and visualized it with a few clicks in Excel.

What’s the difference between these two visualizations? The first one is good only as an illustration in marketing materials.

From a technician's point of view, I am missing information about what the graph is trying to show me:

  • Title
  • Axes with titles
  • Numbers
  • Physical quantities
  • Legend

The same things we were told to use in elementary school. It’s simple, isn’t it?

To be honest, this graph confirmed that his product was indeed 10% better – but still 30% more expensive.

Attributes of a good design

When we are talking about visualization we must also talk about design. In general, there are four attributes of a good design. It must be:

  • Beautiful – because people must find pleasure in it
  • Gratifying – to enjoy it
  • Logical – everything should be in the right place; it must be self-descriptive, with no need for further explanation
  • Functional – the interactions between components must work and it must fit the bigger picture

Besides these attributes, we should choose the right point of view. Look at these two pictures. Which one do you like best?

Do we want to show details? Or a bigger picture? Or both?

We need to decide before we start, according to the nature of the data we visualize, the information we want to communicate, and the intended audience.

We must ask ourselves who will be the customer for our visualized data.

  • Management
  • Marketing
  • Customers
  • Skilled professionals
  • Anyone

We should keep it in mind while preparing the visuals.

What do we visualize?

Data. Lots of data. 30 GB of data, twice a day. Our data is the result of performance tests. I am working on an FD.io project which performs Continuous System and Integration Testing (CSIT) of the Vector Packet Processor (VPP). As I said, we focus on performance. The tests measure packet throughput and packet latency in plenty of VPP configurations. For more information, visit FD.io's website.

The Fast Data Project

So, what do we visualize?

1. Performance test results for the product release:

  • Packet throughput
  • Packet Latency
  • Speedup multi-core throughput – increase in performance if we increase the number of processor cores used for data processing

2. Performance over a defined time period:

  • Packet throughput trend

Where does the data come from?

FD.io CSIT hierarchy (https://docs.fd.io/)

Jenkins runs thousands of tests on our test-beds and provides the results as Robot Framework output.xml files. The files are processed and the data from them is visualized. As a final step, we use Sphinx to generate HTML pages, which are then published on the web.
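
As an illustration, here is a minimal sketch of this processing step, using Robot Framework's result API (the actual CSIT code is more involved; the collector below is illustrative):

from robot.api import ExecutionResult, ResultVisitor

class TestCollector(ResultVisitor):
    """Collect the name and message of every passed test."""
    def __init__(self):
        self.rows = []

    def visit_test(self, test):
        if test.status == "PASS":
            # CSIT stores the measured values in the test message.
            self.rows.append((test.name, test.message))

result = ExecutionResult("output.xml")
collector = TestCollector()
result.visit(collector)
print("Parsed {0} tests".format(len(collector.rows)))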

What we use

Nothing surprising – we use standard Python tools:

  • Python 2.7 / 3.6
  • Numpy
  • Pandas
  • Plot.ly
  • Sphinx

I will not bother you with these tools – you might know them better than I do.

Plots

We use Plot.ly in offline mode to generate various kinds of plots. Hundreds of dynamically generated plots are then put together by Sphinx to create the release Report or the Continuous Performance Trending.

I’ll start with the plots published in the Report. In general, we run every performance test ten times to suppress the effect of anomalies in the results. We calculate the mean value and standard deviation for all tests and then work with them.

Packet throughput – Statistical box plot

The elementary information about the performance is visualized by the statistical box plot. This kind of plot provides all the statistical information about the data – minimum, first quartile, median, third quartile, maximum, and outliers. This information is displayed in the hover box.

As you can see, the X-axis lists indices of individual test suites, as listed in the graph legend; the Y-axis presents the measured packet throughput values [Mpps]. Both axes start at zero, so we know the scale. The tests in each plot are grouped and ordered by the chosen criteria, which are written in the plot title (area, topology, processor architecture, NIC, frame size, number of cores, test type, measured property).
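
As an aside, a minimal Plot.ly sketch of such a box plot, with made-up sample data (the real plots are generated from the parsed test results):

import plotly.graph_objs as plgo
import plotly.offline as ploff

# Hypothetical data: ten throughput samples [Mpps] per test suite.
samples = [
    ("1", [11.2, 11.2, 11.3, 11.2, 11.2, 11.3, 11.2, 11.2, 11.2, 11.3]),
    ("2", [9.1, 10.4, 9.8, 10.0, 9.5, 10.2, 9.3, 9.9, 10.1, 9.6]),
]

traces = [plgo.Box(name=name, y=values) for name, values in samples]
layout = plgo.Layout(
    title="Packet throughput – example suite group",
    xaxis={"title": "Test suite index"},
    yaxis={"title": "Packet throughput [Mpps]", "rangemode": "tozero"},
)
ploff.plot(plgo.Figure(data=traces, layout=layout), filename="box.html")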

From this graph we can also see the variability of the results: in the first graph, the results are almost the same across all runs (there are lines instead of boxes), while in the second one they vary over quite a big range. This means the reliability of these results is lower than in the first case, and there might be an issue in the tested functionality, the tests, the infrastructure & more.

Packet Latency – Scatter plot with error bars

When we measure packet latency, we get the minimal, average and maximal values in both directions of the data flows. The best way we found to visualize this is the scatter plot with error bars.

The dots represent the average values, and the error bars span from the minimum to the maximum values.

The rest of the graph is similar to the previous one.
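
A similar sketch for the latency plot, again with made-up numbers; the asymmetric error bars are built from the distances of the minimum and maximum to the average:

import plotly.graph_objs as plgo
import plotly.offline as ploff

suites = [1, 2, 3]          # test suite indices
avg = [12.0, 15.5, 11.0]    # hypothetical average latency [us]
mins = [9.0, 12.5, 9.5]
maxs = [16.0, 21.0, 14.0]

trace = plgo.Scatter(
    x=suites,
    y=avg,
    mode="markers",
    error_y={
        "type": "data",
        "symmetric": False,
        # the bar reaches from the average up to the maximum ...
        "array": [mx - av for mx, av in zip(maxs, avg)],
        # ... and down to the minimum
        "arrayminus": [av - mn for av, mn in zip(avg, mins)],
    },
)
ploff.plot(plgo.Figure(data=[trace]), filename="latency.html")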

Speedup – Scatter plot with annotations

Speedup is the increase of packet throughput as we increase the number of processor cores used for data processing. We measure the throughput using 1, 2 and 4 cores.

Again, we use the scatter plot. The measured values are represented by dots connected by solid lines. In the ideal case, it would be a straight line – but it is not. So we added dashed lines to show how it would look in an ideal world. In real life, there are limitations, not only in software but also in hardware. They are shown as dotted lines – Link, NIC, and PCIe bus limits. These limits cannot be overcome by software.
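
A sketch of the speedup plot in the same spirit: measured values as dots connected by a solid line, the ideal linear speedup as a dashed line, and a hypothetical NIC limit as a dotted line:

import plotly.graph_objs as plgo
import plotly.offline as ploff

cores = [1, 2, 4]
measured = [4.2, 7.9, 13.8]               # hypothetical throughput [Mpps]
ideal = [measured[0] * c for c in cores]  # perfect linear scaling

data = [
    plgo.Scatter(x=cores, y=measured, name="measured", mode="lines+markers"),
    plgo.Scatter(x=cores, y=ideal, name="ideal", mode="lines",
                 line={"dash": "dash"}),
    plgo.Scatter(x=cores, y=[14.88] * len(cores), name="NIC limit",
                 mode="lines", line={"dash": "dot"}),
]
ploff.plot(plgo.Figure(data=data), filename="speedup.html")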

3D Data

Some of our visualizations present three-dimensional data; in this example, the packet throughput is measured in a configuration with two changing parameters.

The easiest way to visualize it is to use Excel: with a few clicks, we can create a graph like the one above. It looks quite good, but:

  • Orientation in the graph is not so easy. What if there were 20 samples in a row?
  • It is difficult to read the results
  • In some cases – which are very probable – some bars can be hidden behind other ones
  • And Plot.ly does NOT support this kind of graph because, as they say, it is not needed. And they are right

So we had to look for a better solution, and we found the heat-map. It presents three-dimensional data in two-dimensional space. It is easy to process all the information at one quick glance, and we can quickly spot any anomalies in the pattern, as we expect the highest value to be in the top-left corner, decreasing to the right and down.
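
A minimal heat-map sketch with two hypothetical parameters (frame size and number of cores):

import plotly.graph_objs as plgo
import plotly.offline as ploff

trace = plgo.Heatmap(
    x=["64B", "512B", "1518B"],          # frame size
    y=["1 core", "2 cores", "4 cores"],  # number of cores
    z=[[11.2, 6.1, 2.0],                 # hypothetical throughput [Mpps]
       [21.9, 11.8, 2.0],
       [41.5, 12.0, 2.0]],
    colorscale="Viridis",
)
ploff.plot(plgo.Figure(data=[trace]), filename="heatmap.html")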

Packet throughput trending – Scatter plot

 

The trending graphs show the trend of packet throughput over a chosen time period – 90 days in this case. The data points (again, averages of 10 samples) are displayed as dots. The trend lines are calculated by the JumpAvg algorithm, developed by our colleague, which is based on the minimum description length (MDL) principle.

What is important in the visualization of a trend are the changes in it. We mark them with colored circles: red for regression and green for progression. These changes in the trend can be easily spotted by testers and/or developers, so we immediately know the effect of merged changes on product performance.

Tibor Frank

[lighty.io] OVSDB & OpenFlow

OVSDB and OpenFlow controller based on lighty.io

PANTHEON.tech has recently published an example application of an SDN controller, using RESTCONF Northbound module and OVSDB + OpenFlow Southbound modules.

In this blog, we are going to describe how to use the SDN controller with an Open vSwitch instance running in OpenStack.

Open vSwitch, OVSDB & OpenFlow

Open vSwitch is an open-source virtual switch which uses OVSDB (the Open vSwitch Database) and the OVSDB management protocol for the management of virtual OpenFlow switches, referred to as bridges.

The bridges are configurable via the OpenFlow protocol, according to the OpenFlow switch specification.
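
As a quick standalone illustration (separate from the OpenStack setup described below), a bridge can be created and pointed at an OVSDB manager and an OpenFlow controller with ovs-vsctl:

sudo ovs-vsctl add-br br0                             # create bridge br0
sudo ovs-vsctl set-manager ptcp:6640                  # accept OVSDB manager connections on TCP port 6640
sudo ovs-vsctl set-controller br0 tcp:127.0.0.1:6633  # set the OpenFlow controller of br0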

We have already written a blog about the OpenFlow protocol and its support in lighty.io. There's also an example SDN controller application called lighty-community-restconf-ofp-app, which utilizes the RESTCONF northbound module and the OpenFlow southbound module only. You can find it in our GitHub repository.

Connecting to Open vSwitch of OpenStack

In this blog, we will show you an example setup and a sequence of requests using both the OVSDB and OpenFlow protocols, implemented as SB modules in lighty.io.

We have published an example SDN controller called lighty-community-restconf-ovsdb-app, which utilizes the RESTCONF northbound module and the OVSDB southbound module only. Check its README.md file and Postman collection for more details.

As we have already mentioned, the Open vSwitch used in this testing setup is running in OpenStack. Here is a picture and description of the setup:

 

The SDN controller is running on a machine with IP address 10.14.0.160, with the RESTCONF NB plugin listening on port 8888.

  • Postman or curl requests are submitted from the same machine where the SDN controller is running, so the URLs use the localhost address (127.0.0.1)
  • The Open vSwitch instance is running on a machine with IP address 10.14.0.103 (the same address used in the ovs-vsctl command below). The tested instance has been set up by DevStack scripts, but it can be any Open vSwitch instance running in an OpenStack network node, a compute node, or outside of OpenStack
  • The TCP port used for the OVSDB server(s) is 6640
  • The TCP port used by the OpenFlow server(s) is 6633

Example workflow

The subsequent requests can be found in the repository on GitHub, in the form of a Postman collection. In the README.md & in this blog, we use curl commands to send the RESTCONF requests, and the Python module json.tool to pretty-print the JSON responses.

1. Configure OVSDB manager of Open vSwitch

The following piece of the ovs-vsctl show command output shows the initial configuration and state of the Open vSwitch, where we can see that the Neutron service is connected as an OVSDB manager (Manager “ptcp:6640:127.0.0.1”). The Neutron service is also configured as the OpenFlow controller for the bridge br-tun (Controller “tcp:127.0.0.1:6633”), and br-tun is connected to the controller.

sudo ovs-vsctl show
729321e6-991d-4ae5-a4f9-96b1e2919596
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}

Using the following ovs-vsctl command, we set up Open vSwitch to listen for a second OVSDB manager connection at TCP port 6640 on the interface with IP address 10.14.0.103. The command keeps the configuration of the Neutron service as an OVSDB manager (ptcp:6640:127.0.0.1).

sudo ovs-vsctl set-manager ptcp:6640:127.0.0.1 ptcp:6640:10.14.0.103

As a result of the command above, the output of the ovs-vsctl show command changes: there should be two managers configured, but only one of them connected – the Neutron service. We have to start and configure the SDN controller in order to initiate the OVSDB connection from the controller's side.

sudo ovs-vsctl show
729321e6-991d-4ae5-a4f9-96b1e2919596
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Manager "ptcp:6640:10.14.0.103"
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}

2. Setup OVSDB connection

This RESTCONF request results in an OVSDB session initiation towards the pre-configured OVSDB server in the Open vSwitch.

curl -v --request PUT \
  --url http://127.0.0.1:8888/restconf/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2FHOST1 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/json' \
  --data '{
        "network-topology:node": [
          {
            "node-id": "ovsdb://HOST1",
            "connection-info": {
              "ovsdb:remote-port": "6640",
              "ovsdb:remote-ip": "10.14.0.103"
            }
          }
        ]
      }'

You can check whether the session has been established using the ovs-vsctl show command – both OVSDB managers should be connected now. You can also use the next RESTCONF request.

sudo ovs-vsctl show
729321e6-991d-4ae5-a4f9-96b1e2919596
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Manager "ptcp:6640:10.14.0.103"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}

3. Retrieve OVSDB network topology data (all nodes)

Now we are going to ask lighty.io (meaning the SDN controller) about the state of the OVS. In this step, we access the SDN controller through a RESTCONF request; the controller then queries OVS via the OVSDB protocol, returning the same information as shown by ovs-vsctl show.

This RESTCONF request returns the same data as the output of the ovs-vsctl show command. But while the show command returns plain text, the RESTCONF output is formatted as JSON or XML (depending on the Accept header), which is more appropriate for an API between the software layers of SDN solutions (i.e., requests returning JSON- or XML-formatted output are SDN-ready).

NOTE: At this point, the OVSDB connection between the SDN controller and Open vSwitch is established. There's also another connection, used by the Neutron service. You can check this in the output of ovs-vsctl show:

Manager "ptcp:6640:127.0.0.1"
    is_connected: true
Manager "ptcp:6640:10.14.0.103"
    is_connected: true

… and also in the output of RESTCONF request:

"ovsdb:manager-entry": [
    {
        "target": "ptcp:6640:127.0.0.1",
        "connected": true,
        "number_of_connections": 5
    },
    {
        "target": "ptcp:6640:10.14.0.103",
        "connected": true,
        "number_of_connections": 1
    }
]

The only OpenFlow connection(s) in this state are used by the Neutron service; you can see them in the output of ovs-vsctl show:

    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
... and the same for other bridges

… and also in the output of RESTCONF request:

... all bridges should have:
                    "ovsdb:controller-entry": [
                        {
                            "target": "tcp:127.0.0.1:6633",
                            "controller-uuid": "de378546-d727-4631-8d46-fa57d78737d9",
                            "is-connected": true
                        }
                    ],

Here is an example of the complete ovs-vsctl show command output:

sudo ovs-vsctl show
729321e6-991d-4ae5-a4f9-96b1e2919596
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Manager "ptcp:6640:10.14.0.103"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "tap2f509846-a3"
            tag: 4
            Interface "tap2f509846-a3"
                type: internal
        Port "qr-40a33ce6-dd"
            tag: 6
            Interface "qr-40a33ce6-dd"
                type: internal
        Port "tape9302402-e4"
            tag: 1
            Interface "tape9302402-e4"
                type: internal
        Port "tap63c483cb-87"
            tag: 6
            Interface "tap63c483cb-87"
                type: internal
        Port "tap960bd59d-2e"
            tag: 5
            Interface "tap960bd59d-2e"
                type: internal
        Port "tap74a59f96-94"
            tag: 3
            Interface "tap74a59f96-94"
                type: internal
        Port "qg-9285bad8-81"
            tag: 2
            Interface "qg-9285bad8-81"
                type: internal
        Port "tap3792b4af-27"
            tag: 7
            Interface "tap3792b4af-27"
                type: internal
        Port int-br-infra
            Interface int-br-infra
                type: patch
                options: {peer=phy-br-infra}
        Port "qr-9da1a177-1a"
            tag: 7
            Interface "qr-9da1a177-1a"
                type: internal
        Port "qg-7f8467e0-a4"
            tag: 2
            Interface "qg-7f8467e0-a4"
                type: internal
        Port "qr-7da2c452-59"
            tag: 1
            Interface "qr-7da2c452-59"
                type: internal
        Port "qg-5a4bd0e5-a0"
            tag: 2
            Interface "qg-5a4bd0e5-a0"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qr-91d9970c-ef"
            tag: 1
            Interface "qr-91d9970c-ef"
                type: internal
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-infra
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-infra
            Interface br-infra
                type: internal
        Port phy-br-infra
            Interface phy-br-infra
                type: patch
                options: {peer=int-br-infra}
    ovs_version: "2.9.2"

The related RESTCONF request:

curl -v --request GET \
  --url http://127.0.0.1:8888/restconf/data/network-topology:network-topology/topology=ovsdb%3A1 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Accept: application/json' \
  | python -m json.tool

The example output of the request can be found here. 

4. Retrieve specific node from OVSDB topology data (node-id: “ovsdb://HOST1”)

curl -v --request GET \
  --url http://127.0.0.1:8888/restconf/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2FHOST1 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Accept: application/json' \
  | python -m json.tool

Since there's only one OVSDB topology node in the SDN controller, the output contains the same data as in the case of the previous request.

5. Retrieve OVSDB data of specific bridge (br-int):

curl -v --request GET \
--url http://127.0.0.1:8888/restconf/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2FHOST1%2Fbridge%2Fbr-int \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--header 'Accept: application/json' \
| python -m json.tool

This request returns only a subset of data related to the bridge br-int.

6. Setup SDN controller as OpenFlow controller for bridge br-int:

curl -v --request PUT \
  --url http://localhost:8888/restconf/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2FHOST1%2Fbridge%2Fbr-int \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/json' \
  --data '{
            "network-topology:node": [
                  {
                    "node-id": "ovsdb://HOST1/bridge/br-int",
                       "ovsdb:bridge-name": "br-int",
                        "ovsdb:controller-entry": [
                          {
                            "target": "tcp:10.14.0.160:6633"
                          }
                        ]
                  }
              ]
          }'

7. Check the state of the OpenFlow connection – retrieve the controller-entry list of br-int:

curl -v --request GET \
  --url http://localhost:8888/restconf/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2FHOST1%2Fbridge%2Fbr-int/controller-entry=tcp%3A10.14.0.160%3A6633 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Accept: application/json' \
  | python -m json.tool

The GET request above retrieves only the OVSDB data of the OpenFlow connection between Open vSwitch and the SDN controller. The output contains only one entry (if the OpenFlow connection is established, the item “is-connected” is set to true):

{
    "ovsdb:controller-entry": [
        {
            "controller-uuid": "5bfe55c9-70da-4e3b-b1ff-c6ecc8b6e62c",
            "is-connected": true,
            "target": "tcp:10.14.0.160:6633"
        }
    ]
}

In the output of the ovs-vsctl show command, or in the previous GET requests to OVSDB, you will also see the connection between Open vSwitch and the Neutron service:

"ovsdb:controller-entry": [
    {
        "target": "tcp:127.0.0.1:6633",
        "controller-uuid": "57b4a453-5ee5-40ea-953a-4132319ad1eb",
        "is-connected": true
    },
    {
        "target": "tcp:10.14.0.160:6633",
        "controller-uuid": "bc0af587-fc76-44d9-ab24-fe926b1099e6",
        "is-connected": true
    }
],

8. Retrieve OpenFlow network topology:

curl -v --request GET \
--url http://127.0.0.1:8888/restconf/data/network-topology:network-topology/topology=flow%3A1 \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--header 'Accept: application/json' \
| python -m json.tool

Here's an example of the output, where you can find the node-id of the Open vSwitch instance. It can be used in subsequent requests.

9. Retrieve OpenFlow data of all nodes (the reply also includes the OpenFlow flow tables):

curl -v --request GET \
  --url http://127.0.0.1:8888/restconf/data/opendaylight-inventory:nodes \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Accept: application/json' \
  | python -m json.tool

Here is the output from our example.

10. Retrieve OpenFlow data of a specific node (the reply also includes the OpenFlow flow tables):

This request uses a node-id in the URL. The node-id can be found in the replies to requests 8 and 9.

curl -v --request GET \
  --url http://127.0.0.1:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A143423481343818 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Accept: application/json' \
  | python -m json.tool

11. Retrieve specific OpenFlow table of specific node:

curl -v --request GET \
--url http://127.0.0.1:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A143423481343818/table=0 \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--header 'Accept: application/json' \
| python -m json.tool

12. Delete OpenFlow controller connection from Open vSwitch configuration of bridge br-int:

curl -v --request DELETE \
--url http://localhost:8888/restconf/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2FHOST1%2Fbridge%2Fbr-int/controller-entry=tcp%3A10.14.0.160%3A6633 \
--header 'Authorization: Basic YWRtaW46YWRtaW4='

13. Close OVSDB connection to the Open vSwitch instance:

curl -v --request DELETE \
--url http://127.0.0.1:8888/restconf/data/network-topology:network-topology/topology=ovsdb%3A1/node=ovsdb%3A%2F%2FHOST1 \
--header 'Authorization: Basic YWRtaW46YWRtaW4='

OpenFlow manager (OFM)

For this setup, you can also use the OFM application, which provides a GUI for the management of OpenFlow switches. You can connect OFM to the controller and retrieve the OpenFlow tables of a specific switch – and the flows will be displayed graphically. You can also modify existing flows or add new ones using OFM. See our blog about OpenFlow integration.

Conclusion

In this blog, we have described the use of an SDN controller with OVSDB and OpenFlow support, which can be used to manage Open vSwitch virtual switches. We used an Open vSwitch running in an OpenStack environment and described the sequence of requests needed to connect the SDN controller to the OVSDB and OpenFlow interfaces of Open vSwitch.

This approach can be used to manage Open vSwitch instances running in OpenStack network nodes and compute nodes, without breaking the connection between the Neutron service and the Open vSwitch instances. The SDN controller example and the described requests can also be used with any virtual or physical network device supporting OpenFlow, OVSDB, or both protocols.

The usage of the RESTCONF NB plugin in the SDN controller means that any application can implement a RESTCONF (HTTP/REST/API) client and communicate with the controller, as we have demonstrated with the Postman application and the OpenFlow Manager (OFM) in the previous blog.

[lighty.io] OpenFlow Integration

lighty.io 9.2.x provides examples of OVSDB & OpenFlow SDN controllers for integration with your SDN solution. These examples will guide you through lighty.io controller core initialization with the OVSDB and/or OpenFlow southbound plugins attached. You can use these management protocols with a really small memory footprint and simple runtime packaging.

Today, we will show you how to run and integrate the OpenFlow plugin in lighty.

What is OpenFlow?

OpenFlow (OF) is a communications protocol that gives access to the forwarding plane of a network switch or router over the network. OpenFlow can be used for:

  • Quality of Service measurement by traffic filtering
  • Network monitoring (i.e., using the controller as a monitoring device)

In a virtual networking context, OF can be used to program virtual switches with tenant-level segregation tags (for example, VLANs). In the context of NFV, OF can be used to redirect traffic to a chain of services. The protocol is managed by the Open Networking Foundation.

Why do we need OpenFlow?

Routers and switches offer various (limited) levels of user programmability. However, engineers and managers often need more than the limited functionality of this hardware. OpenFlow delivers consistent traffic management and engineering for exactly these needs, by controlling the functions independently of the hardware used.

PANTHEON.tech has implemented the OpenFlow plugin in lighty-core. Today, we will show you how you can run the plugin yourself.

Prerequisites

In order to build and install lighty-core artifacts locally, follow the procedure below:

  1. Install JDK – make sure JDK 8 is installed
  2. Install Maven – make sure you have Maven 3.5.0 or later installed
  3. Set up Maven – make sure you have a proper settings.xml in your ~/.m2 directory
  4. Download/clone lighty-core
  5. (Optional) Download/clone the OpenFlow Manager app

Build and Run OpenFlow plugin example

1. Download the lighty-core repository:

git clone https://github.com/PantheonTechnologies/lighty-core.git

2. Check out lighty-core version 9.2.x:

git checkout 9.2.x

3. Build the lighty-core project with the Maven command:

mvn clean install

This will download all needed dependencies and create a .zip archive in the ‘lighty-examples/lighty-community-restconf-ofp-app’ directory.

Extract this archive and run the ‘start-ofp.sh’ script, or run the .jar file using Java 8 with the command:

java -jar lighty-community-restconf-ofp-app-9.2.1-SNAPSHOT.jar

Use custom config files

The previous command runs the application with the default configuration. In order to run it with a custom configuration, edit (or create a new) JSON configuration file. An example JSON configuration can be found in the lighty-community-restconf-ofp-app-9.2.1-SNAPSHOT folder.

To start the application with it, pass the path to the configuration file as an argument:

java -jar lighty-community-restconf-ofp-app-9.2.1-SNAPSHOT.jar sampleConfigSingleNode.json

OpenFlow plugin and RESTCONF Configuration

The important configuration, which decides what can be changed, is stored in sampleConfigSingleNode.json. To apply any changes, we need to start OFP with this configuration file passed as a Java argument.

ForwardingRulesManager

FLOWs can be added to the OpenFlow Plugin (OFP) in two ways:

  • Sending the FLOW to the config data-store. The ForwardingRulesManager (FRM) listens to the config data-store; when it changes, FRM syncs the changes to the connected device, once available.
    A flow added this way is persistent.
  • Sending an RPC message directly to the device. This option works without FRM; when the device is restarted, the configuration will disappear.

If you need to disable FRM, set enableForwardingRulesManager to false in an external configuration file, then simply start OFP with this external configuration.
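
As a sketch, the relevant entry in such an external configuration file could look like the following (the exact key layout is defined by the example application – check sampleConfigSingleNode.json for the authoritative structure):

{
    "enableForwardingRulesManager": false
}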

RESTCONF

The RESTCONF configuration can be changed in the JSON config file sampleConfigSingleNode.json, mentioned above. It is possible to change the RESTCONF port, IP address, or version of RESTCONF. Currently, the version of RESTCONF is set to DRAFT_18, but it can be set to DRAFT_02.

"restconf": {
    "httpPort": 8888,
    "restconfServletContextPath": "/restconf",
    "inetAddress": "0.0.0.0",
    "jsonRestconfServiceType": "DRAFT_18"
  },

How to start the OpenFlow example

Firstly, start the OpenFlow example application.

Make sure to read through this guide on how to install mininet.

The next step is to start mininet with at least one Open vSwitch (use version 2.2.0 or higher).

sudo mn --controller=remote,ip=<IP_OF_RUNNING_LIGHTY> --topo=tree,1 --switch ovsk,protocols=OpenFlow13

For this explanation of OFP usage, RESTCONF is set to DRAFT_18. All RESTCONF calls used in this example can be imported into Postman from the file OFP_postman_collection.json, found in the project resources.

We will quickly check if the controller is the owner of the connected device. If it is not, then the controller is not running, or the device is not properly connected:

curl --request GET \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/entity-owners:entity-owners \
  --header 'Authorization: Basic YWRtaW46YWRtaW4='

If you followed the instructions, there is a single controller running (not a cluster) and only one device connected, so the result is:

{
  "entity-owners": {
    "entity-type": [{
      "type": "org.opendaylight.mdsal.ServiceEntityType",
      "entity": [{
        "id": "/odl-general-entity:entity[name='openflow:1']",
        "candidate": [{
          "name": "member-1"
        }],
        "owner": "member-1"
      }]
    }, {
      "type": "org.opendaylight.mdsal.AsyncServiceCloseEntityType",
      "entity": [{
        "id": "/odl-general-entity:entity[name='openflow:1']",
        "candidate": [{
          "name": "member-1"
        }],
        "owner": "member-1"
      }]
    }]
  }
}

Let's get information about the connected device. If you want to see the whole OFP inventory, use the ‘get inventory’ call from the Postman collection.

From config:

curl -k --request GET \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A1 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4='

From operational:

curl -k --request GET \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A1?content=nonconfig \
  --header 'Authorization: Basic YWRtaW46YWRtaW4='

JSON result starts with:

{
  "node": [{
    "id": "openflow:1",
    "node-connector": [{
      "id": "openflow:1:LOCAL",
      "flow-node-inventory:peer-features": "",
      "flow-node-inventory:advertised-features": "",
      "flow-node-inventory:port-number": 4294967294,
      "flow-node-inventory:hardware-address": "4a:15:31:79:7f:44",
      "flow-node-inventory:supported": "",
      "flow-node-inventory:current-speed": 0,
      "flow-node-inventory:current-feature": "",
      "flow-node-inventory:state": {
        "live": false,
        "link-down": true,
        "blocked": false
      },
      "flow-node-inventory:maximum-speed": 0,
      "flow-node-inventory:name": "s1",
      "flow-node-inventory:configuration": "PORT-DOWN"
    }, {
      "id": "openflow:1:2",
      "flow-node-inventory:peer-features": "",
      "flow-node-inventory:advertised-features": "",
      "flow-node-inventory:port-number": 2,
      "flow-node-inventory:hardware-address": "fa:c3:2c:97:9e:45",
      "flow-node-inventory:supported": "",
      "flow-node-inventory:current-speed": 10000000,
      "flow-node-inventory:current-feature": "ten-gb-fd copper",
      "flow-node-inventory:state": {
        "live": false,
        "link-down": false,
        "blocked": false
      },
      .
      .
      .

Add FLOW

Now let's try to add a table-miss flow, which modifies the switch to send all unmatched packets to the controller via packet-in messages.

To config data-store:

curl --request PUT \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A1/table=0/flow=1 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/xml' \
  --data '<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
   <barrier>false</barrier>
   <cookie>54</cookie>
   <flags>SEND_FLOW_REM</flags>
   <flow-name>FooXf54</flow-name>
   <hard-timeout>0</hard-timeout>
   <id>1</id>
   <idle-timeout>0</idle-timeout>
   <installHw>false</installHw>
   <instructions>
       <instruction>
           <apply-actions>
               <action>
                   <output-action>
                       <max-length>65535</max-length>
                       <output-node-connector>CONTROLLER</output-node-connector>
                   </output-action>
                   <order>0</order>
               </action>
           </apply-actions>
           <order>0</order>
       </instruction>
   </instructions>
   <match/>
   <priority>0</priority>
   <strict>false</strict>
   <table_id>0</table_id>
</flow>'

Directly to a device via RPC call:

curl --request POST \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/operations/sal-flow:add-flow \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/json' \
  --data '{
    "input": {
      "opendaylight-flow-service:node":"/opendaylight-inventory:nodes/opendaylight-inventory:node[opendaylight-inventory:id='\''openflow:1'\'']",
      "priority": 0,
      "table_id": 0,
      "instructions": {
        "instruction": [
          {
            "order": 0,
            "apply-actions": {
              "action": [
                {
                  "order": 0,
                  "output-action": {
                    "max-length": "65535",
                    "output-node-connector": "CONTROLLER"
                  }
                }
              ]
            }
          }
        ]
      },
      "match": {
      }
    }
}'

Get FLOW

Check if the flow is in the data-store:

In the config:

curl --request GET \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A1/table=0 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4='

In operational:

curl --request GET \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A1/table=0?content=nonconfig \
  --header 'Authorization: Basic YWRtaW46YWRtaW4='

Result:

{
    "flow-node-inventory:table": [
        {
            "id": 0,
            "opendaylight-flow-table-statistics:flow-table-statistics": {
                "active-flows": 1,
                "packets-looked-up": 14,
                "packets-matched": 4
            },
            "flow": [
                {
                    "id": "1",
                    "priority": 0,
                    "opendaylight-flow-statistics:flow-statistics": {
                        "packet-count": 4,
                        "byte-count": 280,
                        "duration": {
                            "nanosecond": 936000000,
                            "second": 22
                        }
                    },
                    "table_id": 0,
                    "cookie_mask": 0,
                    "hard-timeout": 0,
                    "match": {},
                    "cookie": 54,
                    "flags": "SEND_FLOW_REM",
                    "instructions": {
                        "instruction": [
                            {
                                "order": 0,
                                "apply-actions": {
                                    "action": [
                                        {
                                            "order": 0,
                                            "output-action": {
                                                "max-length": 65535,
                                                "output-node-connector": "CONTROLLER"
                                            }
                                        }
                                    ]
                                }
                            }
                        ]
                    },
                    "idle-timeout": 0
                }
            ]
        }
    ]
}

Get the FLOW directly from the modified device s1 on the command line:

sudo ovs-ofctl -O OpenFlow13 dump-flows s1

Device result:

cookie=0x36, duration=140.150s, table=0, n_packets=10, n_bytes=700, send_flow_rem priority=0 actions=CONTROLLER:65535

Update FLOW

This works the same way as adding a flow: OFP finds openflow:1, table=0, flow=1 from the URL and applies the changes from the body.
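
For instance, re-sending the PUT request from the ‘Add FLOW’ step with an edited body (here a changed cookie and flow-name, the rest staying the same) updates flow=1 in place:

curl --request PUT \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A1/table=0/flow=1 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/xml' \
  --data '<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
   <id>1</id>
   <table_id>0</table_id>
   <priority>0</priority>
   <cookie>55</cookie>
   <flow-name>FooXf55</flow-name>
   <match/>
   <instructions>
       <instruction>
           <order>0</order>
           <apply-actions>
               <action>
                   <order>0</order>
                   <output-action>
                       <max-length>65535</max-length>
                       <output-node-connector>CONTROLLER</output-node-connector>
                   </output-action>
               </action>
           </apply-actions>
       </instruction>
   </instructions>
</flow>'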

Delete FLOW

From config data-store:

curl --request DELETE \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A1/table=0/flow=1 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/xml' \
  --data ''

Via RPC calls:

curl --request POST \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/operations/sal-flow:remove-flow \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/json' \
  --data '{
    "input": {
      "opendaylight-flow-service:node":"/opendaylight-inventory:nodes/opendaylight-inventory:node[opendaylight-inventory:id='\''openflow:1'\'']",
      "table_id": 0
    }
}'

Proactive flow installation & traffic monitor via Packet-In messages

In order to create traffic, we need to set up the topology behavior. There are three methods of flow table population (reactive, proactive, hybrid); we encourage you to read more about their differences.

In our example, we use proactive flow installation, which means that we create flows proactively, before any traffic is started.

  1. Start the lighty OpenFlow example lighty-community-restconf-ofp-app
  2. Create a mininet network with a linear topology and two Open vSwitches:

sudo mn --controller=remote,ip=<IP_OF_RUNNING_LIGHTY>:6633 --topo=linear,2 --switch ovsk,protocols=OpenFlow13

First, let's verify that the connection between the devices is not yet established:

mininet> pingall
*** Ping: testing ping reachability
h1 -> X
h2 -> X
*** Results: 100% dropped (0/2 received)

The next step is to create flows that establish a connection between the switches. This is managed by setting the Match and Action fields in table 0. In this example, we connect the two ports eth1 and eth2 of a switch together, so everything that comes in on port eth1 is redirected to port eth2, and vice versa.

The next configuration adds the sending of monitoring packets in the network: switch s1 is set to also send all packets that arrive at port eth2 to the controller. The visualized result should look like this:

1. Add a flow to switch s1 (named ‘openflow:1’ in OFP), which redirects all traffic coming in on port eth1 (named ‘1’ in OFP) to port eth2 (named ‘2’ in OFP).

curl --request PUT \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A1/table=0/flow=0 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/json' \
  --data '{
      "flow": [
          {
              "table_id": "0",
              "id": "0",
              "priority": "10",
              "match": {
                  "in-port": "openflow:1:1"
              },
              "instructions": {
                  "instruction": [
                      {
                          "order": 0,
                          "apply-actions": {
                              "action": [
                                  {
                                      "order": 0,
                                      "output-action": {
                                          "output-node-connector": "2",
                                          "max-length": "65535"
                                      }
                                  }
                              ]
                          }
                      }
                  ]
              }
          }
      ]
}'

2. Add a flow to switch s1 which connects ports 2 and 1 in the other direction, and also sets switch s1 to send all packets transmitted through port 2 to the controller:

curl --request PUT \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A1/table=0/flow=1 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/json' \
  --data '{
    "flow": [
        {
            "table_id": "0",
            "id": "1",
            "priority": "10",
            "match": {
                "in-port": "openflow:1:2"
            },
            "instructions": {
                "instruction": [
                    {
                        "order": 0,
                        "apply-actions": {
                            "action": [
                                {
                                    "order": 0,
                                    "output-action": {
                                        "output-node-connector": "1",
                                        "max-length": "65535"
                                    }
                                },
                                {
                                    "order": 1,
                                    "output-action": {
                                        "output-node-connector": "CONTROLLER",
                                        "max-length": "65535"
                                    }
                                }
                            ]
                        }
                    }
                ]
            }
        }
    ]
}'

3. Check all added flows at switch s1:

{
    "flow-node-inventory:table": [
        {
            "id": 0,
            "opendaylight-flow-table-statistics:flow-table-statistics": {
                "active-flows": 1,
                "packets-looked-up": 317,
                "packets-matched": 273
            },
            "flow": [
                {
                    "id": "1",
                    "priority": 10,
                    "opendaylight-flow-statistics:flow-statistics": {
                        "packet-count": 0,
                        "byte-count": 0,
                        "duration": {
                            "nanosecond": 230000000,
                            "second": 5
                        }
                    },
                    "table_id": 0,
                    "cookie_mask": 0,
                    "hard-timeout": 0,
                    "match": {
                        "in-port": "openflow:1:2"
                    },
                    "cookie": 0,
                    "flags": "",
                    "instructions": {
                        "instruction": [
                            {
                                "order": 0,
                                "apply-actions": {
                                    "action": [
                                        {
                                            "order": 0,
                                            "output-action": {
                                                "max-length": 65535,
                                                "output-node-connector": "1"
                                            }
                                        },
                                        {
                                            "order": 1,
                                            "output-action": {
                                                "max-length": 65535,
                                                "output-node-connector": "CONTROLLER"
                                            }
                                        }
                                    ]
                                }
                            }
                        ]
                    },
                    "idle-timeout": 0
                }
            ]
        }
    ]
}

 

4. Add a flow to switch s2 to connect ports 1 and 2:

curl --request PUT \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A2/table=0/flow=0 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/json' \
  --data '{
    "flow": [
        {
            "table_id": "0",
            "id": "0",
            "priority": "10",
            "match": {
                "in-port": "openflow:2:1"
            },
            "instructions": {
                "instruction": [
                    {
                        "order": 0,
                        "apply-actions": {
                            "action": [
                                {
                                    "order": 0,
                                    "output-action": {
                                        "output-node-connector": "2",
                                        "max-length": "65535"
                                    }
                                }
                            ]
                        }
                    }
                ]
            }
        }
    ]
}'

5. Check all added flows at switch s2:

{
    "flow-node-inventory:table": [
        {
            "id": 0,
            "opendaylight-flow-table-statistics:flow-table-statistics": {
                "active-flows": 2,
                "packets-looked-up": 294,
                "packets-matched": 274
            },
            "flow": [
                {
                    "id": "0",
                    "priority": 10,
                    "opendaylight-flow-statistics:flow-statistics": {
                        "packet-count": 0,
                        "byte-count": 0,
                        "duration": {
                            "nanosecond": 388000000,
                            "second": 7
                        }
                    },
                    "table_id": 0,
                    "cookie_mask": 0,
                    "hard-timeout": 0,
                    "match": {
                        "in-port": "openflow:2:1"
                    },
                    "cookie": 0,
                    "flags": "",
                    "instructions": {
                        "instruction": [
                            {
                                "order": 0,
                                "apply-actions": {
                                    "action": [
                                        {
                                            "order": 0,
                                            "output-action": {
                                                "max-length": 65535,
                                                "output-node-connector": "2"
                                            }
                                        }
                                    ]
                                }
                            }
                        ]
                    },
                    "idle-timeout": 0
                },
                {
                    "id": "1",
                    "priority": 10,
                    "opendaylight-flow-statistics:flow-statistics": {
                        "packet-count": 0,
                        "byte-count": 0,
                        "duration": {
                            "nanosecond": 98000000,
                            "second": 3
                        }
                    },
                    "table_id": 0,
                    "cookie_mask": 0,
                    "hard-timeout": 0,
                    "match": {
                        "in-port": "openflow:2:2"
                    },
                    "cookie": 0,
                    "flags": "",
                    "instructions": {
                        "instruction": [
                            {
                                "order": 0,
                                "apply-actions": {
                                    "action": [
                                        {
                                            "order": 0,
                                            "output-action": {
                                                "max-length": 65535,
                                                "output-node-connector": "1"
                                            }
                                        }
                                    ]
                                }
                            }
                        ]
                    },
                    "idle-timeout": 0
                }
            ]
        }
    ]
}

Now, when we try to ping all devices in mininet, we receive positive feedback:

mininet> pingall
*** Ping: testing ping reachability
h1 -> h2
h2 -> h1
*** Results: 0% dropped (2/2 received)

Show Packet-In messages with Wireshark

Wireshark is a popular network protocol analyzer. It lets you analyze everything that is happening in your network and is a necessity for network administrators and power users alike.

Start Wireshark with the command:

sudo wireshark

After it starts, double-click on the ‘any’ interface. Then, filter packets with ‘openflow_v4.type == 10’. Wireshark will now only show Packet-In messages from the OpenFlow protocol.

To create traffic in the mininet network, we use the mininet command:

h2 ping h1

If everything is set up correctly, we can see Packet-In messages showing up.

Show Packet-In messages with PacketInListener

In the section for Java developers below, there is an option for setting a Packet-In listener. This configuration logs every Packet-In message to the console at TRACE level. When this is done, run the mininet command ‘h2 ping h1’ again.

If everything is set up correctly, then we can see Packet-In messages in logs.

Java Developers

Some configuration can be done in the Java Main class of the OpenFlow Protocol example.

Packet-in listener

Packet handling is managed by adding a packet listener. In our example, we add the PacketInListener class, which will log packet-in messages. For a new packet listener class, it is important to implement the interface PacketProcessingListener.

The first step is to create an instance of PacketInListener (1 in the window below). Then we add it to the OpenflowSouthboundPluginBuilder in this part of the code (2 in the window below).

//3. start openflow SBP
     PacketProcessingListener packetInListener = new PacketInListener();           (1)
     final OpenflowSouthboundPlugin plugin;
     plugin = new OpenflowSouthboundPluginBuilder()
             .from(configuration, lightyController.getServices())
             .withPacketListener(packetInListener)                                 (2)
             .build();
     ListenableFuture<Boolean> start = plugin.start();
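
For completeness, here is a minimal sketch of what such a listener class could look like (assuming the OpenDaylight openflowplugin API; the actual example class in lighty may differ in detail):

import org.opendaylight.yang.gen.v1.urn.opendaylight.packet.service.rev130709.PacketProcessingListener;
import org.opendaylight.yang.gen.v1.urn.opendaylight.packet.service.rev130709.PacketReceived;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PacketInListener implements PacketProcessingListener {

    private static final Logger LOG = LoggerFactory.getLogger(PacketInListener.class);

    @Override
    public void onPacketReceived(final PacketReceived packetReceived) {
        // Log every packet-in notification on TRACE level.
        LOG.trace("Packet-in received: ingress={}, payload={} bytes",
                packetReceived.getIngress(), packetReceived.getPayload().length);
    }
}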

OpenFlow Manager (OFM) App

OpenFlow Manager (OFM) is an application developed to run on top of ODL/lighty.io to visualize OpenFlow (OF) topologies, program OF paths, gather OF stats, and manage flow tables. In order to install the OFM app, follow these steps:

1. Download the OFM repository from GitHub and check out the master branch

git clone https://github.com/PantheonTechnologies/OpenDaylight-Openflow-App.git
git checkout master

2. NGINX installation

NGINX serves as a proxy server towards the OFM application and the ODL/lighty RESTCONF interface. Before NGINX starts, it is important to replace the NGINX config file in /etc/nginx/sites-enabled/ with the default file from the root of this project.

In this default file, we have set up the NGINX port, port for RESTCONF and Grunt port. Please be sure that these ports are correct.

After replacing the config file, install and start NGINX with the commands:

sudo apt install nginx
sudo systemctl start nginx

If you need to stop NGINX, type in the command:

sudo systemctl stop nginx

3. OFM configuration
Before running the OFM standalone application, it is important to configure:

  • The controller base URL
  • NGINX port
  • The ODL/lighty username and password

All this information should be written in the env.module.js file, located in the OFM directory src/common/config.

4. Grunt installation
To run the OFM standalone app on a local web server, you can use the Grunt tool; everything needed for it is prepared in the OFM repository. Grunt is installable via npm, the Node.js package manager.

After running Grunt and NGINX, you can access the OFM standalone app via a web browser on the configured NGINX port, by typing the URL localhost:9000.

OpenFlow Manager environment

With OFM, we can also start the lighty OpenFlow Southbound plugin example from the lighty-core repository, following the example above. We will use mininet to simulate the network topology; start it with the command:

sudo mn --controller=remote,ip=<IP_OF_RUNNING_LIGHTY>:6633 --topo=linear,2 --switch ovsk,protocols=OpenFlow13

If everything is set up correctly, then you should see a basic view of the network topology:

Device management

To see detailed device information, select the devices that you want to inspect. Then click on the “Flow management” section in the main menu bar at the top. Now you should see device information and the added flows.

In the Flow section, you can add a flow by clicking on the pen at the top-left side of the table, marked by an arrow. On the right side of each table row, you can delete or update the selected flow.

Adding Flows

Adding a flow can be done from the Flow management section by clicking on the pen at the top-left side of the flow table. Here you can choose the switch to which the flow should be sent. Then just click, in the left menu, on each property that should be added to the flow.

After filling in all required fields, you can view your flow as a JSON message by clicking on the “show preview” button. To send the flow to a switch, click on the “Send message” button.
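
The same flow can also be sent programmatically against the RESTCONF interface that OFM proxies to. Below is a minimal sketch using the JDK’s HttpClient (Java 11+); the RESTCONF port (8888), the default credentials and the flow path (node openflow:1, table 0, flow 1), as well as the flow.json file saved from the preview window, are assumptions – adjust them to your deployment:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class SendFlow {
    public static void main(String[] args) throws Exception {
        // The request body is the same JSON shown by the "show preview" button.
        String flowJson = Files.readString(Path.of("flow.json"));
        String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8888/restconf/config/"
                        + "opendaylight-inventory:nodes/node/openflow:1/table/0/flow/1"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Basic " + auth)
                .PUT(HttpRequest.BodyPublishers.ofString(flowJson))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("RESTCONF replied: " + response.statusCode());
    }
}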

Statistics

To show network statistics, click on the “Statistics” section in the main menu bar. Then choose the statistic you want to see from the drop-down menu and click on the “Refresh data” button.

Conclusion

We have managed to integrate the whole OpenFlow plugin into lighty-core, the same way it is implemented in OpenDaylight Fluorine (SR1). It is therefore possible to use the lighty controller to manage network devices via the OpenFlow protocol.

If you would like to see how our commercial version might benefit your business, please visit lighty.io for more information, or contact us here.

PANTHEON.tech @ MWC 2019 in Barcelona

PANTHEON.tech‘s Denis Rasulev visited Barcelona for the annual Mobile World Congress. Here are his thoughts on the event.

I was thrilled to visit MWC from the beginning. Such a huge and renowned event always shows the latest tech, and not only in the mobile sector. This year’s themes and keywords can be summarized in three words: 5G, IoT & AI.

First of all, I would like to point out how well the event was organized. I was greeted with my badge right at the airport, with directions towards the conference being provided by helpful volunteers. After settling in Barcelona, I set off to start my day early and arrive at the MWC at 8 AM.

I soon regretted being an early bird, since most booths were closed at this time and most presenters were just settling in. The Fira Gran Via, designed by Japanese architect Toyo Ito, emphasized the futuristic approach of the conference. Saying the venue was monumental is an understatement – I was only able to visit 2/3 of all booths on the first day. In total, I made 89,109 steps throughout MWC 2019.

Themes @ MWC 2019

Some booths were both impressive and beautiful, some taking up several hundred square meters of space. Some even had two floors, just to underline the massiveness of the event. The venue was packed with attendees from the morning on, but it was easy to talk to presenters and find your way around each pavilion.

With each day and conference, we can feel that 5G is coming closer to consumers and real-life deployment. What seemed like a wild, unreal idea a few years ago is now on its way to dictating the future of each new technology. I saw remote road assistance made possible by lightning-fast 5G. Healthcare could become fully automated or remotely controlled – again, thanks to 5G coverage.

I am glad that PANTHEON.tech is making sure to stay at the forefront of this revolution and keep up.

The future of our industry

There were prototypes of robots which could make human-staffed coffee shops obsolete: one took your order via voice recognition and prepared it. Another robot built perfect paper planes with inhuman precision. Some of these products seemed like toys for playing around. But we have to remember that this is what makes greatness – testing, thinking out of the box and creating a functional concept. It was wonderful to see that startups also had the opportunity to present themselves, not only to attendees, but to potential investors as well.

I held the future in my hands, in the form of Barefoot Networks’ Tofino 2 chip. I was able to see the first functional, foldable phone by Samsung. But most importantly, I was able to see how the future will be shaped in the coming years.

MWC is a must-go, powerful event with great networking opportunities. Trust me, you want to be there. In the future, I will definitely require a larger team to cover more ground at the next Mobile World Conference. Hopefully, PANTHEON.tech will see you there!

Vector Packet Processing 104: gRPC & REST

Welcome back to our Vector Packet Processing implementation guide, Part 4.

Today, we will go through the essentials of gRPC and REST, introduce their core concepts, and add one missing functionality to our VPP build. This part will also introduce GoVPP, the open-source Go language bindings for the VPP binary API.

The Issue

Natively, VPP does not include a gRPC / RESTful interface. PANTHEON.tech has developed an internal solution, called VPP-RPC, which utilizes a gRPC-gateway to VPP, using GoVPP. Through it, you can now connect to VPP using a REST client.

In case you are interested in this solution, please contact us via our website or at sales@pantheon.tech.


Introduction

First and foremost, here are the terms that you will come across in this guide:

  • gRPC is a remote procedure call (RPC) system initially developed by Google. It uses HTTP/2 for transport and protocol buffers as the interface description language, and provides features such as authentication, bidirectional streaming and flow control, blocking or non-blocking bindings, and cancellation and timeouts. It generates cross-platform client and server bindings for many programming languages.
  • gRPC-gateway (GRPC-GW) is a plugin for protoc. It reads the gRPC service definition and generates a reverse-proxy server which translates a RESTful JSON API into gRPC. This server is generated according to custom options in your gRPC definition.
  • VPP-Agent is a set of VPP-specific plugins that interact with Ligato, in order to access services within the same cloud. VPP Agent provides VPP functionality to client apps through a VPP model-driven API.
  • VPP-RPC is our new RESTful VPP service. It utilizes gRPC & gRPC-gateway as 2 separate processes, in order to facilitate communication with VPP through GoVPP.

JSON message sequence

The gRPC gateway exposes the REST service, for which there is no built-in support in VPP. By using the gRPC-gateway and gRPC server services, the VPP API is available through gRPC and REST at the same time. Both services use models generated from the VPP binary APIs, so exposing both of them is easy. It is up to the user to choose which mechanism they will use.

When exchanging data between a browser and a server, the data can only be text. Any JavaScript object can be converted into JSON and sent to the server. This allows us to process data as JavaScript objects, while avoiding complicated translations and parsing issues.

The sequence diagram below describes the path the original JSON message takes in our solution:

  1. The client sends an HTTP request to GRPC-GW
  2. GRPC-GW transforms the JSON into a protobuf message & sends it to the gRPC server
  3. Each RPC has a Go method handling its logic. For unary RPCs, this simply means copying the protobuf message into the corresponding VPP message structure and passing it to the GoVPP binary API
  4. GoVPP eventually passes the message to the underlying VPP binary API, where the desired functionality is executed


JSON message sequence diagram

VPP API build process

The figure below describes the build process for a single VPP API. Let’s see what needs to happen for the tap API:

Build process for a VPP API. @PANTHEON.tech

VPP APIs are defined in /usr/share/vpp/api/, which is accessible after installing VPP. The Go package tap is the generated binary API of the ‘tap’ VPP module. It is generated from the file tap.api.json.

This file drives the creation of all the building blocks:

  • GoVPP’s binapi-generator generates the GoVPP file tap.ba.go
  • vpp-rpc-protogen generates tap.proto, which contains the gRPC messages and services, including the URL for each RPC
  • protoc’s Go plugin will compile the proto file and create a gRPC stub tap.pb.go, containing client & server interfaces that define RPC methods and protobuf message structs.
  • vpp-rpc-implementor will generate the code that implements the TapServer interface – the actual RPC methods calling GoVPP APIs – in the Tap.server.go file.
  • protoc’s gRPC-gateway plugin will compile the proto file and create the reverse proxy tap.pb.gw.go. We don’t have to touch this file further.

Running the examples

To run all the above, we need to make sure all these processes are running:

  • VPP service
  • gRPC server (needs root privileges)

    sudo vpp-rpc-server
  • gRPC gateway

    vpp-rpc-gateway

Running a simple request using Curl

If we want to invoke the API we have created, we can use curl. Vpe requires a URL (vpe/showversion/request), which maps the above-mentioned API to VPP’s binary API. We will now use curl to POST a request to the default address localhost:8080:

curl -H "Content-Type: application/json" -X POST 127.0.0.1:8080/Vpe/ShowVersion/request --silent

{"Retval":0,"Program":"dnBlAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=","Version":"MTguMDctcmMwfjEyOC1nNmYxYzQ4ZAAAAAAAAAAAAAA=",
"BuildDate":"xaB0IG3DoWogIDMgMTQ6MTA6NTQgQ0VTVCAyMDE4AAA=",
"BuildDirectory":"L2hvbWUvcGFsby92cHA"}

The decoded reply says:

{
    "Retval": 0,
    "Program": "vpe",
    "Version": "18.07-rc0~128-g6f1c48d",
    "BuildDate": "Thu May  3 14:10:54 CEST 2018",
    "BuildDirectory": "/home/user/vpp"
}
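
The string fields in the raw reply are Base64-encoded and NUL-padded, so they can be decoded with any Base64 tool. A small sketch using the JDK’s built-in decoder (the Version value is taken from the reply above):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DecodeReply {
    public static void main(String[] args) {
        String version = "MTguMDctcmMwfjEyOC1nNmYxYzQ4ZAAAAAAAAAAAAAA=";
        byte[] decoded = Base64.getDecoder().decode(version);
        // VPP pads the strings with NUL bytes; trim() strips them after decoding.
        System.out.println(new String(decoded, StandardCharsets.UTF_8).trim());
        // prints: 18.07-rc0~128-g6f1c48d
    }
}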

Postman collection

We provide a Postman collection within our service, which serves as a starting point for users of our solution. The collection, located in the vpp-rpc repository at the tests/vpp-rpc.postman_collection.json path, contains various requests and subsequent tests for the Tap module.

Performance analysis in Curl

curl can give us a detailed timing analysis of a request’s performance. If we run the previous request 100 times, the summary times (in milliseconds) we usually get are:

mean=4.82364000000000000
min=1.047
max=23.070
average rr per_sec=207.31232015656226418223
average rr per_min=12438.73920939373585093414
average rr per_hour=746324.35256362415105604895

Judging from the graph below, we see that most of the requests take well below the average mark. Profiling our solution, we found that the anomalies (above 10 ms) are caused by GoVPP itself, while waiting on the reply from VPP. This behavior is well documented on the GoVPP wiki. We can conclude that our solution closely mirrors the performance of the synchronous GoVPP APIs.

Here is the unary RPC total time in ms:

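If you would rather reproduce such a measurement programmatically than with curl, a rough equivalent is sketched below; the endpoint and the 100 repetitions mirror the curl example above:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TimingLoop {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://127.0.0.1:8080/Vpe/ShowVersion/request"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        double min = Double.MAX_VALUE, max = 0, sum = 0;
        int runs = 100;
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            client.send(request, HttpResponse.BodyHandlers.ofString());
            double ms = (System.nanoTime() - start) / 1_000_000.0;
            min = Math.min(min, ms);
            max = Math.max(max, ms);
            sum += ms;
        }
        System.out.printf("mean=%.3f min=%.3f max=%.3f (ms)%n", sum / runs, min, max);
    }
}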

In conclusion, we have introduced the concepts of gRPC and REST and run our VPP API + GoVPP build with a REST service feature. Furthermore, we have shown you our in-house solution VPP-RPC, which facilitates the connection between the API and GoVPP.

If you would like to inquire about this solution, please contact us for more information.

In the last part of this series, we will take a closer look at the gNMI service and how we can benefit from it.


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.

Vector Packet Processing 103: Ligato & VPP Agent

Welcome back to our guide on Vector Packet Processing. In today’s post, number three in our VPP series, we will take a look at Ligato and its VPP Agent.

Ligato is one of the multiple technologies commercially supported by PANTHEON.tech.

What is a VNF?

A Virtual Network Function is a software implementation of a network function. It runs on one or multiple virtual machines or containers, on top of a hardware networking infrastructure. Individual functions of this network may be implemented or combined together, in order to create a complete networking communication service. A VNF can be used as a standalone entity or as part of an SDN architecture.

Its life-cycle is controlled by orchestration systems, such as the increasingly popular Kubernetes. Cloud-native VNFs and their control/management plane can expose REST or gRPC APIs to external clients, communicate over a message bus, or provide a cloud-friendly environment for deployment and usage. They can also support high-performance data planes, such as VPP.

What is Ligato?

Ligato is an open-source cloud platform for building and wiring VNFs. It provides infrastructure and libraries, code samples and a CI/CD process to accelerate and improve the overall developer experience. It paves the way towards faster code reuse, reducing costs and increasing application agility & maintainability. Being native to the cloud, Ligato has a minimal footprint and can be easily integrated, customized, extended and deployed using Kubernetes. The three main components of Ligato are:

  • CN Infra – a Golang platform for developing cloud-native microservices. It can be used to develop any microservice, even though it was primarily designed for Virtual Network Function management/control plane agents.
  • SFC Controller – an orchestration module for data-plane connectivity within cloud-native containers. These containers may be VPP-Agent enabled or communicate via veth interfaces.
  • BGP Agent – a Border Gateway Protocol information provider.

You can also view a Ligato demonstration done by PANTHEON.tech here.

The platform is modular – new plugins provide new functionality. These plugins can be set up in layers, where each layer can form a new platform with different services at a higher layer plane. This approach mainly aims to create a management/control plane for VPP, with the addition of the VPP Agent.

What is the VPP Agent?

The VPP Agent is a set of VPP-specific plugins that interact with Ligato, in order to access services within the same cloud. VPP Agent provides VPP functionality to client apps through a VPP model-driven API. External and internal clients can access this API if they are running on the same CN-Infra platform, within the same Linux process.

Quickstarting the VPP Agent

For this example, we will work with the pre-built Docker image.

Install & Run

  1. Pull and run the Docker image:

    docker pull ligato/vpp-agent
    docker run -it --name vpp --rm ligato/vpp-agent
  2. Using agentctl, configure the VPP Agent:

    docker exec -it vpp agentctl -
  3. Check the configuration, using agentctl or the VPP console:

    docker exec -it vpp agentctl -e 172.17.0.1:2379 show
    docker exec -it vpp vppctl -s localhost:500

For a detailed rundown of the quickstart, please refer to the Quickstart section of the VPP Agent’s GitHub.

We have shown you how to integrate and quickstart the VPP Agent, on top of Ligato.

Our next post will highlight gRPC/REST – until then, enjoy playing around with VPP Agent.


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.

Vector Packet Processing 102: Honeycomb & hc2vpp

Welcome to the second part of our VPP introduction series, where we will talk about the details of the Honeycomb project. Please visit our previous post on VPP Plugins & the Binary API, which Honeycomb uses to manage the VPP agent.

What is Honeycomb?

Honeycomb is a generic data plane management agent that provides a framework for building specialized agents. It exposes NETCONF, RESTCONF and BGP as northbound interfaces.

Honeycomb runs several highly functional sets of APIs, based on ODL, which are used to program the VPP platform. It leverages ODL’s existing tools and integrates several of its components (YANG Tools, MD-SAL, NETCONF/RESTCONF…). In other words – it is a resource-light, bare-bones version of OpenDaylight.

Its translation layer and data processing pipelines are classified as generic, which makes it extensible and usable not only as a VPP-specific agent.

Honeycomb’s functionality can be split into two main layers, plus plugins:

  • Data Processing layer – pipeline processing for data from northbound interfaces, towards the Translation layer
  • Translation layer – mainly handles configuration updates from the data processing layer, plus reads and writes configuration data
  • Plugins – extend Honeycomb’s usability

Honeycomb mainly acts as a bridge between VPP and the actual OpenDaylight SDN Controller:


Examples of VPP x Honeycomb integrations

We’ve already showcased several use cases on Pantheon Technologies’ YouTube channel.

For the purpose of integrating VPP with Honeycomb, we will further refer to the project hc2vpp, which was directly developed for VPP usage.

What is hc2vpp?

This VPP-specific build is called hc2vpp, and it provides an interface (somewhere between a GUI and a CLI) for VPP. It runs on the same host as the VPP instance and allows you to manage it out of the box. This project is led by Pantheon’s own Michal Čmarada.

Honeycomb was created due to a need for configuring VPP via NETCONF/RESTCONF. At the time it was created, NETCONF/RESTCONF was provided by ODL, so Honeycomb is based on certain ODL tools (datastore, YANG Tools and others). ODL as such uses an enormous variety of tools, so Honeycomb was created as a separate project in order to keep a smaller footprint. It exists as a separate server and starts these ODL implementations on its own.

Later on, it was decided that Honeycomb should be split into a core instance, with hc2vpp handling the VPP-related parts. The split also occurred in order to make it possible to create proprietary device control agents. hc2vpp (Honeycomb to VPP) is a configuration agent, so that configurations can be sent via NETCONF/RESTCONF. It translates the configuration to low-level APIs (called Binary APIs).

Honeycomb and hc2vpp can be installed in the same way as VPP, by downloading the repositories from GitHub. You can either:

Install Honeycomb

Install hc2vpp

For more information, please refer to the hc2vpp official project site.

In the upcoming post, we will introduce you to the Ligato VPP Agent.


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.

Vector Packet Processing 101: VPP Plugins & Binary API

In the first part of our new series, we will be building our first VPP platform plugin, using basic examples. We will start with a first dive into plugin creation and finish by introducing VAPI into this configuration.

If you do not know what VPP is, please visit our introductory post regarding VPP and why you should consider using it.

Table of contents:

  • How to write a new VPP Plugin
    • 1. Preparing your new VPP plugin
    • 2. Building & running your new plugin
  • How to create new API messages
  • How to call the binary API
    • Additional C/C++ Examples

How to write a new VPP Plugin

The principle of VPP is that you can plug in a new graph node, adapt it to your network’s purposes and run it right off the bat. Including a new plugin does not mean you need to change your core code with each new addition. Plugins can either be included in the processing graph, or they can be built outside the source tree and become an individual component in your build.

Furthermore, this separation of plugins makes crashes a matter of a simple process restart, which does not require your whole build to be restarted because of one plugin failure.

1. Preparing your new VPP plugin

The easiest way to create a new plugin that integrates with VPP is to reuse the sample code at “src/examples/sample-plugin”. The sample code implements a trivial “macswap” algorithm that demonstrates the plugin’s run-time integration with the VPP graph hierarchy, API and CLI.

  • To create a new plugin based on the sample plugin, copy and rename the sample plugin directory

cp -r src/examples/sample-plugin/sample src/plugins/newplugin

#replace 'sample' with 'newplugin'. as always, take extra care with sed!
cd src/plugins/newplugin
fgrep -il "SAMPLE" * | xargs sed -i.bak 's/SAMPLE/NEWPLUGIN/g'
fgrep -il "sample" * | xargs sed -i.bak 's/sample/newplugin/g'
rm *.bak*
rename 's/sample/newplugin/g' *

The plugin directory now contains the following files:

    • node.c – implements the functionality of this graph node (swaps source and destination addresses) – update it according to your requirements.
    • newplugin.api – defines plugin’s API, see below
    • newplugin.c, newplugin_test.c – implements plugin functionality, API handlers, etc.
  • Update CMakeLists.txt in newplugin directory to reflect your requirements:
add_vpp_plugin(newplugin
  SOURCES
  node.c
  newplugin.c

  MULTIARCH_SOURCES
  node.c

  API_FILES
  newplugin.api

  API_TEST_SOURCES
  newplugin_test.c

  COMPONENT vpp-plugin-newplugin
)
  • Update newplugin.c to hook your plugin into the VPP graph properly:
VNET_FEATURE_INIT (newplugin, static) = 
{
 .arc_name = "device-input",
 .node_name = "newplugin",
 .runs_before = VNET_FEATURES ("ethernet-input"),
};
  • Update newplugin.api to define your API requests/replies. For more details see “API message creation” below.
  • Update node.c to perform the required actions on input frames, such as handling incoming packets and more

2. Building & running your new plugin

  • Build vpp and your plugin. New plugins will be built and integrated automatically, based on the CMakeLists.txt
make rebuild
  • (Optional) Build & install vpp packages for your platform
make pkg-deb
cd build-root
sudo dpkg -i *.deb
  • The binary API header files, which you can include later, are located in build-root/build-vpp_debug-native/vpp/vpp-api/vapi
    •  If vpp is installed, they are located in /usr/include/vapi
  • Run vpp and check whether your plugin is loaded (newplugin has to be loaded and listed when using the show plugins CLI command)
make run
...
load_one_plugin:189: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:189: Loaded plugin: newplugin_plugin.so (Sample VPP Plugin)
load_one_plugin:189: Loaded plugin: nsim_plugin.so (network delay simulator plugin)
...
DBGvpp# show plugins
...
 Plugin Version Description
 1. ioam_plugin.so 19.01-rc0~144-g0c2319f Inbound OAM
 ...
 x. newplugin_plugin.so 1.0 Sample VPP Plugin
 ...

How to create new API messages

API messages are defined in *.api files – see src/vnet/devices/af_packet.api, src/vnet/ip/ip.api, etc. These API files are used to generate corresponding message handlers. There are two types of API messages – non-blocking and blocking. These messages are used to communicate with the VPP Engine to configure and modify data path processing.

Non-blocking messages use one request and one reply message. Message replies can be auto-generated, or defined manually. Each request contains two mandatory fields – “client_index” and “context” – and each reply message contains the mandatory fields “context” and “retval”.

  • API message with auto-generated reply

autoreply define ip_table_add_del
{
 u32 client_index;
 u32 context;
 u32 table_id;
...
};
  • API message with manually defined reply
define ip_neighbor_add_del
{
 u32 client_index;
 u32 context;
 u32 sw_if_index;
...
};
define ip_neighbor_add_del_reply
{
 u32 context;
 i32 retval;
 u32 stats_index;
...
};

Blocking messages use one request and a series of replies defined in the *.api file. Each request contains two mandatory fields – “client_index” and “context” – and each reply message contains the mandatory field “context”.

  • A blocking message is defined using two structs – *_dump and *_details

define ip_fib_dump
{
 u32 client_index;
 u32 context;
...
};
define ip_fib_details
{
 u32 context;
...
};

Once you define a message in an API file, you have to define and implement the corresponding handlers for the given request/reply message. These handlers are defined in one of the component/plugin files and they use a predefined naming – vl_api_…_t_handler – for each API message.

Here is an example for existing API messages (you can check it in src/vnet/ip component):

#define foreach_ip_api_msg \
_(IP_FIB_DUMP, ip_fib_dump) \
_(IP_NEIGHBOR_ADD_DEL, ip_neighbor_add_del) \
...
static void vl_api_ip_neighbor_add_del_t_handler (vl_api_ip_neighbor_add_del_t * mp, vlib_main_t * vm)
{
...
 REPLY_MACRO2 (VL_API_IP_NEIGHBOR_ADD_DEL_REPLY,
...
static void vl_api_ip_fib_dump_t_handler (vl_api_ip_fib_dump_t * mp)
{
...
 send_ip_fib_details (am, reg, fib_table, pfx, api_rpaths, mp->context);
...

Request and reply handlers are usually defined in api_format.c (or in the plugin). Requests use a predefined naming – api_… – for each API message, and you also have to define a help string for each API message:

static int api_ip_neighbor_add_del (vat_main_t * vam)
{
...
  /* Construct the API message */
  M (IP_NEIGHBOR_ADD_DEL, mp);
  /* send it... */
  S (mp);
  /* Wait for a reply, return good/bad news */
  W (ret);
  return ret;
}
static int api_ip_fib_dump (vat_main_t * vam)
{
...
  M (IP_FIB_DUMP, mp);
  S (mp);
  /* Use a control ping for synchronization */
  MPING (CONTROL_PING, mp_ping);
  S (mp_ping);
  W (ret);
  return ret;
}
#define foreach_vpe_api_msg \
...
_(ip_neighbor_add_del, \
 "(<intfc> | sw_if_index <id>) dst <ip46-address> " \
 "[mac <mac-addr>] [vrf <vrf-id>] [is_static] [del]") \
...
_(ip_fib_dump, "") \
...

Replies can be auto-generated or manually defined.

  • auto-generated reply using define foreach_standard_reply_retval_handler, with predefined naming
  • manually defined reply with details

How to call the binary API

In order to call the binary API, we will introduce VAPI to our configuration.

VAPI is the high-level C/C++ binary API. Please refer to src/vpp-api/vapi/vapi_doc.md for details.

VAPI’s many advantages include:

  • All headers in a single place – /usr/include/vapi => simplifies code generation
  • Hidden internals – one no longer has to care about message IDs, byte-order conversion
  • Easier binapi calls – passing user provided context between callbacks

We can use the following C++ code to call our new plugin’s binary API.

#include <cstdlib>
#include <iostream>
#include <cassert>

//necessary includes & macros
#include <vapi/vapi.hpp>
#include <vapi/vpe.api.vapi.hpp>
DEFINE_VAPI_MSG_IDS_VPE_API_JSON

//include the desired modules / plugins
#include <vapi/newplugin.api.vapi.hpp>
DEFINE_VAPI_MSG_IDS_NEWPLUGIN_API_JSON

using namespace vapi;
using namespace std;

//parameters for connecting
static const char *app_name = "test_client";
static const char *api_prefix = nullptr;
static const int max_outstanding_requests = 32;
static const int response_queue_size = 32;

#define WAIT_FOR_RESPONSE(param, ret)      \
  do                                       \
    {                                      \
      ret = con.wait_for_response (param); \
    }                                      \
  while (ret == VAPI_EAGAIN)

//global connection object
Connection con;

void die(int exit_code)
{
    //disconnect & cleanup
    vapi_error_e rv = con.disconnect();
    if (VAPI_OK != rv) {
        fprintf(stderr, "error: (rc:%d)", rv);
    }

    exit(exit_code);
}

int main()
{
    //connect to VPP
    vapi_error_e rv = con.connect(app_name, api_prefix, max_outstanding_requests, response_queue_size);

    if (VAPI_OK != rv) {
        cerr << "error: connecting to vlib";
        return rv;
    }

    try
    {
        Newplugin_macswap_enable_disable cl(con);

        auto &mp = cl.get_request().get_payload();

        mp.enable_disable = true;
        mp.sw_if_index = 5;

        auto rv = cl.execute ();
        if (VAPI_OK != rv) {
            throw exception{};
        }

        WAIT_FOR_RESPONSE (cl, rv);
        if (VAPI_OK != rv) {
            throw exception{};
        }

        //verify the reply
        auto &rp = cl.get_response ().get_payload ();
        if (rp.retval != 0) {
            throw exception{};
        }
    }
    catch (...)
    {
        cerr << "Newplugin_macswap_enable_disable ERROR" << endl;
        die(1);
    }

    die(0);
}

Additional C/C++ Examples

Furthermore, you are encouraged to try the minimal VAPI example provided in vapi_minimal.zip. This example creates a loopback interface, assigns it an IPv4 address and then prints the address.
Follow these steps:

  • Install VPP
  • Extract the archive, build & run examples
unzip vapi_minimal.zip
mkdir build; cd build
cmake ..
make

#c example
sudo ./vapi_minimal
#c++ example
sudo ./vapi_minimal_cpp

In conclusion, we have:

  • successfully built and run our first VPP plugin
  • created and called an API message in VPP

Our next post will introduce and highlight the key reasons why you should consider Honeycomb/hc2vpp in your VPP build.


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub. Follow us on Twitter.

Watch our YouTube Channel.

PANTHEON.tech presents: Vector Packet Processing (VPP) Guide

Welcome to our new series on how to build and program FD.io‘s Vector Packet Processing framework, also known as VPP.

The name stems from VPP’s usage of vector processing, which can process multiple packets at a time with low latency. Single packet processing and high latency were a common occurrence in the older, scalar processing approach, which VPP aims to make obsolete.

What will this series include?

This five-part series will include the following features, with the ultimate goal of getting to know your VPP framework and adapting it to your network:

  1. Binary API
  2. Honeycomb/hc2vpp
  3. Ligato VPP Agent (ligato/vpp-agent at Github)
  4. gRPC/REST
  5. gNMI

Why should I start using Vector Packet Processing?

The main advantages are:

  • high performance with a proven technology
  • production level quality
  • flexible and extensible

The principle of VPP is that you can plug in a new graph node, adapt it to your network’s purposes and run it right off the bat. Including a new plugin does not mean you need to change your core code with each new addition. Plugins can either be included in the processing graph, or they can be built outside the source tree and become an individual component in your build.

Furthermore, this separation of plugins makes crashes a matter of a simple process restart, which does not require your whole build to be restarted because of one plugin failure.

For a full list of features, please visit the official Vector Packet Processing Wiki. You can also check our previous installments on VPP integration.

Preparation of VPP packages

In order to build and start with VPP yourself, you will have to:

  1. Download VPP’s repository from this page or follow the installation instructions
  2. Clone the repository inside your system, or from VPP’s GitHub

Enjoy and explore the repository as you wish. We will continue exploring the Binary API in the next part of our series.


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.

PANTHEON.tech @ 2020 Vision Executive Summit in Lisbon

I was sent to Lisbon by PANTHEON.tech, in order to attend the annual 2020 Vision Executive Summit. Here are my experiences and insights into this event.

The 2020 Vision Executive Summit, presented by Light Reading, was focused mainly on the pending revolution of 5G networks, automation, Edge Computing, IoT, security, etc. It hosted a variety of vendors and service providers, who provided insights into the telecom industry’s current trends and emerging topics.

Themes of the summit

In the case of 5G, we saw a huge opportunity to discuss PANTHEON.tech’s future involvement and plans in this revolution. The challenges surrounding 5G were discussed by industry leaders with hands-on experience. This was beneficial, since we were confronted with the harsh reality of 5G. As it is a technology in progress, many questions are still open. What will be the use cases? What should we prepare for, and when?

Nobody really knows how it may turn out, when it will become widely available for consumers, or if the world is even prepared for it. But it was a great opportunity to meet the people whose bread and butter is building the future 5G network. It was an invaluable experience to see a realistic view from industry insiders and their perception of the future. It was a collective of like-minded individuals and companies in the fields relevant to PANTHEON.tech’s vision.

Another heavily discussed topic was security. It is no secret that technology is becoming an ever more important part of our lives, so companies have to rely heavily on defenses against potential security threats and cyber attacks. Panels were held regarding the importance of security measures in expanding networks and the need for flexible and agile security solutions.

Subsequently, Edge Computing, which brings the distribution of data closer to the consumer, was also discussed with regard to its vulnerabilities and future. In this case, it was said with certainty: if you are the type of parent that plans your child’s future for them, make them study cyber security. The investment will return sooner than you could imagine.

Our experience at the summit

Our vision in attending this summit was to find out if it is the right fit for us (spoiler alert – it was), to check on the newest trends in our field, and to see in which direction they are developing. The discussions were open and involved real thoughts and views, without the PR and marketing fluff.

Lisbon is an interesting city, since it is more hidden from the eye of the classic tourist. It reminded me, in a way, of San Francisco. This was mainly due to the trams riding uphill and the many uphill roads one has to take in order to get somewhere. It was surprising, though, that the city makes it a point to keep its original architecture intact, without major reconstructions.

As for the venue itself, the Intercontinental Hotel in Lisbon was nothing short of wonderful. Another highlight was the gala dinner. It was the perfect opportunity for casual networking, in the pompous and spectacular setting of Palacio de Xebregas. I have also experienced my first tuk-tuk ride, where I had to consider whether my life was worth the visit.

In conclusion – it was. I am looking forward to the new business-partners and connections PANTHEON.tech has made at the 2020 Vision Executive Summit.


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.

PANTHEON.tech @ Huawei Connect 2018 in Shanghai

PANTHEON.tech visited Shanghai last week to attend the third annual Huawei Connect. Martin Varga shares some insights from the event.

Activate Intelligence

This year’s theme was Activate Intelligence. Huawei outlined its broad strategy to bring artificial intelligence (AI) to the masses in applications in manufacturing, autonomous driving, smart cities, IoT and other areas. AI will enable billions of new devices to be connected and transfer big data over the network.

The conference was held at the Shanghai World Expo Exhibition Center in Shanghai, China. Huawei put a lot of resources and effort into organizing the event, which showed in its direct impact on over 26,000 attendees. The conference was organized perfectly, down to the last detail (exhibition areas, keynote and conference areas, chill-out zones, etc.).

We witnessed demonstrations of various smart technologies, ranging from smart city applications to smart education and smart transportation. Smart everything.

One of the most impressive technology demonstrations was an AI that was able to translate Chinese to English and vice versa, as well as a human translator could. Microsoft, in cooperation with Huawei, states:

“Internal tests have shown, depending on the language, up to a 23 percent better offline translation quality over competing best-in-class offline packs.”

Huawei is also building an AI ecosystem of partners, targeted to exceed 1 million developers over the next three years, backed by US$140m.

Networking

We had some interesting meetings with Huawei’s representatives. It was very pleasant to learn about Huawei’s visions for the near future, and we are glad to share the same vision for an exciting future. Huawei invests heavily in researching new technologies, such as AI and IoT, in order to define practical use cases that can be deployed into its product portfolio.

PANTHEON.tech, as a software development company, is strongly focused on computer networking, which relates to Huawei’s vision of integrating AI into managing network operations.

Mr. Yang Jin, Director of Network Data Analytics Research at Huawei Technologies Co., stated:

“Artificial Intelligence and Machine Learning will abstract data to make next-generation communication breakthroughs come to life.”

Feel free to contact PANTHEON.tech if you are interested in any of the AI, AR/VR, IoT, Intent-Driven Networking, SDN, NFV, Big Data and related areas. We can talk about your challenges and how we can solve them together.

Martin Varga

Technical Business Development Manager

FRINX UniConfig is now powered by PANTHEON.tech’s lighty.io

What is lighty.io?

lighty.io is an SDK that provides components for the development of SDN controllers and applications based on well-established standards in the networking industry. It takes advantage of PANTHEON.tech’s extensive experience from the involvement in the OpenDaylight platform and simplifies and speeds up the development, integration, and delivery of SDN solutions.

lighty.io also enables SDN programmers to use ODL services in a plain Java SE environment, and it enables a major OpenDaylight distribution vendor to build and deploy their applications faster.
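
To illustrate what this looks like in practice, the snippet below starts the controller core from plain Java SE – a minimal sketch based on the lighty-core examples; the exact builder and utility class names may differ between lighty.io versions:

import io.lighty.core.controller.api.LightyController;
import io.lighty.core.controller.impl.LightyControllerBuilder;
import io.lighty.core.controller.impl.util.ControllerConfigUtils;

public class Main {
    public static void main(String[] args) throws Exception {
        // Build the controller core (MD-SAL, datastore, YANG Tools services).
        LightyController controller = new LightyControllerBuilder()
                .from(ControllerConfigUtils.getDefaultSingleNodeConfiguration())
                .build();
        // start() returns a future that completes once the services are ready.
        controller.start().get();
        // controller.getServices() can now be passed to NB/SB plugins.
        controller.shutdown();
    }
}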

FRINX UniConfig

FRINX UniConfig provides a common network API across physical and virtual devices from different vendors. It leverages an open source device library, which offers connectivity to a multitude of networking devices and VNFs.

The API provides the ability to store intent and operational data from services and devices, enables committing intent to the network, syncs from the network so that the latest device state is reflected in the controller, compares intended state with operational state, and provides device- and network-wide transactions. All changes are applied in such a way that only those parts of the configuration that have changed are updated on the devices.

The UniConfig framework consists of distinct layers, where each layer provides a higher level of abstraction. APIs of the lowest layer provide the ability to send and receive unstructured data to and from devices. The unified layer provides translation capabilities to and from OpenConfig. The UniConfig layer provides access to the intent and the actual state of each device, plus the capability to perform transactions and rollbacks of configurations.

NETCONF devices can be configured via their native YANG models or via OpenConfig. Finally, FRINX UniConfig also provides service modules, based on IETF YANG models, for the configuration of L2VPNs and L3VPNs, and enables the collection of LLDP topology information in heterogeneous networks.

The UniConfig Framework is based on open source projects like OpenDaylight and Honeycomb. It publishes all translation units under the Apache v2 license. Customers and integration partners can freely contribute, modify and create additional device models, which work with the UniConfig Framework.

How did PANTHEON’s lighty.io help?

PANTHEON.tech’s lighty.io helped to make UniConfig run and build faster.

Porting UniConfig to lighty.io required no changes to the application code and has brought many measurable improvements. UniConfig now starts faster, has a smaller memory footprint, and most importantly – significantly reduces build time.

lighty.io packs many features, some of which are:

  • Client libraries for communication with ODL back end for Java, Python, and Golang
  • Enhanced NETCONF device simulator
  • Microservice friendly structure
  • Easy to use utilities for YANG model data serialization and deserialization
  • Example applications for integration with vertx.io, spring.io and others, which enable your productivity
  • Maintained examples and guides, so newcomers can start working immediately and be efficient

About FRINX  

FRINX offers solutions and services for open-source network control and automation. The team is made up of passionate developers and industry professionals who want to change the way networking software is created, deployed and operated. FRINX offers network automation products and distributions of OpenDaylight and FD.io in conjunction with support services, and is proud to count service providers and enterprise companies from the Fortune Global 500 list among its customers.

About PANTHEON.tech 

PANTHEON.tech is a software research & development company focused on network technologies and prototype software. Yet, we do not perceive networks as endless cables behind switches and routers. For us, it is all software-defined. Clean and neat. Able to dynamically expand and adapt according to the customer’s needs.

We thrive in a world of network functions virtualization and arising need for orchestration. Focusing on SDN, NFV, Automotive and Smart Cities. Experts in OpenDaylight, FD.IO VPP, PNDA, Sysrepo, Honeycomb, Ligato and much more.


lighty.io powers datacenter management at kaloom.com


Complete automation and full forwarding plane programmability

Private data centers are the hot topic for companies and enterprises who are not willing to push all the data into public clouds. Kaloom Software Defined Fabric™ (Kaloom SDF) is the world’s first fully programmable, automated, software-based data center fabric capable of running VNFs efficiently at scale. This is the first data center networking fabric on the market that provides complete automation and full forwarding plane programmability.

Kaloom approached PANTHEON.tech last year, knowing Pantheon’s intensive and long involvement in SDN, particularly in the OpenDaylight project. OpenDaylight (ODL) is a modular open platform for orchestrating and automating networks of any size and scale. The OpenDaylight platform arose out of the SDN movement, in which PANTHEON.tech has expertise and experience. Hence, it was a logical step to utilize this expertise in this project and leverage what had already been done.

A traditional ODL-based controller design was not suitable for this job, because of the bulkiness of Karaf-based deployments. Kaloom requested a modern web UI, which the vanilla ODL platform does not provide. lighty.io, as a component library, provides the opportunity to run ODL services such as MD-SAL, NETCONF and YANG Tools in any modern web server stack, and to integrate with other components like MongoDB.

Architecture

The following architecture is becoming something of a blueprint for SDN applications today. We utilize the best of both worlds:

  1. MD-SAL, NETCONF and YANG Tools from ODL
  2. An updated, modern web stack (Jetty/Jersey), and
  3. MongoDB as a persistent data store.


This is how the Kaloom Fabric Manager (KFM) project started. After several months of custom development, we deployed a tailored web application which provides a management UI for Kaloom SDF. We changed and tailored our Visibility Package application to suit Kaloom’s requirements and specifics; this specialized version uses the name KFM. The architecture diagram above shows the details/internals of KFM and how we interconnect with Kaloom’s proprietary Fabric Manager/Virtual Fabric Manager controller devices.

The solution for physical data centers

The lighty.io-based back end of KFM, with its NETCONF plugin, provides REST services to the Angular UI, which uses our Network Topology Visualization Component for better topology visualization and user experience. Using these REST endpoints, it is easy to send specific NETCONF RPCs to the Kaloom SDF controllers.

While working on this next-gen Data Center Infrastructure Management software, we realized that integrating all moving parts of the system is a crucial step for final delivery. Since different teams were working on different parts, it was crucial that we could isolate the lighty.io part of the system and adapt it to Kaloom SDF as much as possible. We used the field-tested NETCONF device simulator from our lighty.io package to deliver software that was tested thoroughly, to ensure the stability of the KFM UI.

Kaloom SDF provides a solution for physical data centers administrated by Data Center Infrastructure Provider (DCIP) users. A physical data center can easily be sliced into virtual data centers, offered to customers called virtual Data Center Operator (vDCO) users. The DCIP user can monitor and configure the physical fabrics – the PODs of the data center. The KFM web UI shows the fabrics in a topology view and allows updating the attributes of fabrics and fabric nodes.

Topology View of Fabric Manager

Advantages for DCIP

The main task of the DCIP user is to slice the fabrics into virtual data centers and virtual fabrics. This process involves choosing servers through associated termination points and associating them with the newly created virtual fabric manager controller. Server resources are used through the virtual fabric manager by vDCO users.

vDCO users can use the server resources and connect them via the network management of their virtual data center. The vDCO can attach server ports to switches with the proper encapsulation settings. After a switch is ready, the vDCO can create a router and attach switches to it. The router offers different configuration possibilities to meet the vDCO user’s needs: L3 interface configuration, static routing, BGP routing, VXLANs and many more. KFM also offers a topology view of the virtual data center network, so you can check the relations between servers, switches, and routers.

Topology View of Fabric Manager

For more details about the KFM UI in action, please see the demo video with the NETCONF simulator of Kaloom SDF below, or visit Kaloom or the Kaloom academy.


lighty.io runs 5G on xRAN

In April 2018, the xRAN forum released the Open Fronthaul Interface Specification – the first specification made publicly available by xRAN since its launch in October 2016. The released specification allows a wide range of vendors to develop innovative, best-of-breed remote radio unit/head (RRU/RRH) products for a wide range of deployment scenarios, which can be easily integrated with virtualized infrastructure & management systems using standardized data models.

This is where PANTHEON.tech entered the scene. We became one of the first companies to introduce a full-stack 5G solution compliant with this specification.

After just a few days spent coding and utilizing readily available lighty.io components, we created a Radio Unit (RU) simulator and an SDN controller to manage a group of Radio Units.

Now, let us inspect the architecture and elaborate on some important details.

We used lighty.io, specifically the generic NETCONF simulator, to set up an xRAN Radio Unit (RU) simulator. xRAN specifies YANG models for 5G Radio Units. The lighty.io NETCONF device library is used as a base, which made it easy to add custom behavior; the 5G RU is then ready to stream data to a 5G controller.

The code in the controller pushes the data collected from RUs into Elasticsearch for further analysis. The RU device emits notifications for the simulated Antenna Line Devices connected to the RU, containing:

  • Measured Rx and Tx input power in mW
  • Tx Bias Current in mA (Internally measured)
  • Transceiver supply voltage in mV (Internally measured)
  • Optional laser temperature in degrees Celsius. (Internally measured)

*We used the xRAN-performance-management device model for this purpose.

lighty.io as a 5G controller

With lighty.io, we created an OpenDaylight-based SDN controller that can connect to RU simulators using NETCONF. Once an RU device is connected, telemetry data is pushed via NETCONF notifications to the controller, and then directly into Elasticsearch.
Usually, Logstash is required to upload data into Elasticsearch. In this case, it is the 5G controller that pushes device data directly to Elasticsearch, using time-series indexing.
When a Radio Unit device connects, the monitoring process starts automatically. The RPC ald-communication is called on the RU device, collecting statistics for:

  • The Number of frames with incorrect CRC (FCS) received from ALD – running counter
  • The Number of frames without stop flag received from ALD – running counter
  • The number of octets received from HDLC bus – running counter

*We used the xran-ald.yang model for this purpose.
The lighty.io 5G controller also listens to the notifications from the RU device mentioned above.

Elasticsearch and Kibana

Data collected by the lighty.io 5G controller via RPC calls and notifications is pushed directly into Elasticsearch indices. Once indexed, Elasticsearch provides a wide variety of queries over the stored data.
Typically, we can display the number of faulty frames received from the “Antenna Line Devices” over time, or analyze operational parameters of Radio Unit devices, like receiving and transmitting input power.
Such data is precious for Radio Unit setup, making a control-plane feedback loop possible.

By adding Elasticsearch into the loop, data analytics and the feedback loop became ready to perform complex tasks, such as faulty-frame statistics from the “Antenna Line Devices” or Radio Unit operational setup.
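
As an illustration of the indexing step, the sketch below pushes one telemetry document into Elasticsearch using its low-level Java REST client; the index name (ru-telemetry) and the document fields are illustrative assumptions, not the actual names used in our controller:

import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class TelemetryIndexer {
    public static void main(String[] args) throws Exception {
        // Connect to a local Elasticsearch instance on its default port.
        RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build();
        // Index a single document; in the controller this happens per notification.
        Request request = new Request("POST", "/ru-telemetry/_doc");
        request.setJsonEntity(
            "{\"timestamp\":\"2019-01-01T00:00:00Z\",\"rx-power-mw\":0.52,\"tx-power-mw\":0.61}");
        Response response = client.performRequest(request);
        System.out.println(response.getStatusLine());
        client.close();
    }
}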

How do we see the future of xRAN with lighty.io?

The benefit of this solution is a full-stack xRAN test. YANG models and specifications alone are obviously not enough, considering the size of the project. With lighty.io 5G xRAN, we invite Radio Unit device vendors and 5G network providers to cooperate and build upon this solution. Having the Radio Unit simulators available and ready allows for a quick development cycle, without being blocked by RU vendors’ bugs.

lighty.io has been used as a 5G rapid application development platform, enabling a quick xRAN Radio Unit monitoring system setup.
We can easily obtain xRAN Radio Unit certification against the ‘lighty.io 5G controller’ and provide RU simulations for the management plane.

Visit the lighty.io page and check out our GitHub for more details.

lighty.io in Data Center Management

The advantages of deploying lighty.io in Data Center Infrastructure Management (DCIM)

The DCIM market is continuing to evolve and large enterprises continue to be the primary adopters of new DCIM software solutions. The goal of a DCIM software initiative is to provide administrators the ability to identify, locate, visualize, and manage all physical data center assets with a holistic view.

PANTHEON.tech has developed lighty.io, based on OpenDaylight, in Java SE. It is great software for implementing customized DCIM solutions, such as an SDN controller, an NFV orchestrator, VNF management, etc.

Some of the features you will benefit from while managing your data center are listed below.

lighty.io scheme and use-case description

Model-driven approach

lighty.io implements a model-driven approach to data center infrastructure management. Because common models are used, the interchange of configuration, operational, monitoring and telemetry data between all parts of a lighty.io-based system becomes possible.

These models define structure, syntax, and semantics of the data processed by each part of the system. Usage of standardized models by vendors (e.g., models from OpenConfig or IETF) leads to seamless migration from one vendor to another.

Scalability and controller hierarchy

  • Horizontal scalability – lighty.io supports clustering, a feature which allows horizontal scaling of the system by adding more instances (nodes) of the controller into a cluster
  • Controller hierarchy – the NB plugins of lighty.io allow the implementation of upper-layer applications running as microservices and performing operations using the controller’s NB plugin API. It is also possible to design a hierarchy of controllers, where the upper-layer controller(s) perform operations using the lower-layer controllers’ NB plugins. One of the implemented NB plugins is a plugin that implements the NETCONF protocol. Using this NB plugin in the hierarchy of controllers makes it possible to manage the lower-layer controllers as NETCONF devices.

Security

lighty.io is implemented in Java, which is by nature a type-safe programming language. Type safety leads to more secure software than software written in, e.g., C/C++, while still reaching good performance. The model-driven approach and source code generation also support software security.

These features minimize the possibility of errors in the code by requiring the verification of input data from external applications and connected devices. Encryption, authorization, and the usage of certificates are a matter of course.

Legacy and heterogeneous systems support

lighty.io implements the main SDN standards, e.g., NETCONF, RESTCONF and YANG. Moreover, legacy technologies are covered as well: lighty.io includes an SNMP southbound plugin. This shows the capability of lighty.io to be used not only in green-field deployments (implementing the system from scratch) but also in brown-field deployments, where a heterogeneous set of networking devices needs to be managed.

Extensibility

As a software design principle, the model-driven approach speeds up and simplifies the implementation of extensions; together with the architecture of lighty.io, this results in great extensibility. The architecture of lighty.io defines Northbound (NB) and Southbound (SB) plugin implementations as model-driven modules.

NB & SB Plugins

NB plugins enable the communication of the controller with upper-layer applications, such as dashboards, upper-layer controllers, inter-DC orchestrators, etc. The upper-layer applications can be implemented as an external service or as a native module of the controller.

The upper layer applications mostly implement application logic, business logic, administration interfaces, data analytics, data transformation etc. NB plugins can be used to:

  • submit commands to the SDN controller,
  • send notifications to upper layers by the controller,
  • send telemetry data to upper layers by the controller,
  • monitor the controller by upper layers,
  • read the operational data of the controller and devices orchestrated by the controller,
  • the configuration of the controller itself or specific device orchestrated by the
    controller.

SB plugins implement protocols and technologies that extend the SDN controller's capabilities with new standards and allow new network devices to be connected. SB plugins can be used for (a sketch follows the list):

  • configuring networking devices,
  • fetching operational (state) data from networking devices,
  • receiving telemetry data,
  • monitoring devices,
  • submitting commands to devices,
  • receiving notifications from devices.
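A matching sketch for the SB side: fetching state data from a mounted device through the controller, which relays the request over its NETCONF SB plugin. The mount-point path follows the usual OpenDaylight convention; the node-id device-1, the host, and the assumption that the device implements the standard ietf-interfaces model are placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReadDeviceStateData {
    public static void main(String[] args) throws Exception {
        // GET interface state from the device behind the NETCONF mount point.
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://controller:8888/rests/data/"
                + "network-topology:network-topology/topology=topology-netconf/"
                + "node=device-1/yang-ext:mount/"
                + "ietf-interfaces:interfaces-state"))
            .header("Accept", "application/json")
            .GET()
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}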

Models and the model-driven approach simplify the implementation of new plugins and upper-layer applications: the models allow generating the source code of classes (OOP constructs) and of related code that verifies the syntax and semantics of the data, which minimizes the probability of implementation errors caused by human interaction.
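As a hypothetical illustration of such generated verification code: a builder derived from a YANG leaf with a range restriction can reject invalid input before it ever reaches the datastore. The class name and the range below are invented for the example.

// Hypothetical builder generated from:
//   leaf mtu { type uint16 { range "68..9216"; } }
public final class NetworkInterfaceBuilder {
    private Integer mtu;

    public NetworkInterfaceBuilder setMtu(Integer mtu) {
        // Enforce the YANG range restriction at construction time.
        if (mtu != null && (mtu < 68 || mtu > 9216)) {
            throw new IllegalArgumentException(
                "mtu " + mtu + " is outside the YANG range 68..9216");
        }
        this.mtu = mtu;
        return this;
    }

    public Integer getMtu() {
        return mtu;
    }
}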

If you would like to know more about lighty.io and how it could improve your business, visit lighty.io or our Product Page.

lighty.io UI: Network Topology Visualization Component


PANTHEON.tech has developed a network topology visualization component for building responsive and scalable front-end network topology visualization applications on top of lighty.io. The component enables you to visualize any topology on any device with a web browser. It will also be included in the lighty.io distribution package.

Existing commercial applications fail to cover network topology visualization sufficiently, so we were compelled to create our own solution, based on the technologies we know and like to use.

The experience gained from developing Visibility Package, a software component used to gather and visualize network topology data from different networks, network management systems, and cloud orchestrators, led PANTHEON.tech developers to create a better solution. Using the network topology visualization component will significantly reduce your development time.

We have developed the topology visualization component as an Angular component, which can be used in Angular applications to create network visualization applications. Thanks to its modularity and customizability, the component can visualize any network, from small company networks to large-scale data centers with thousands of nodes and links.

Picture (1): A screenshot of a spine-leaf network visualization sample.


As every use case's demands, requirements, and scale differ widely, a scalable and universal component was needed. That is why we have based the topology visualization component on the Angular framework, which allows rapid development of responsive, modular, and scalable applications.

Our previous experience showed that SVG does not perform well for visualizing very large network topologies. That is why we decided to use HTML5 Canvas instead: Canvas provides seamless animations and great responsiveness even with thousands of nodes and links.


Some of the great features of the topology visualization component are:


  • Ease of use

The topology visualization component includes extensive documentation and examples to help developers during application creation. With the Angular CLI, a basic application can be set up in minutes.

  • Customizability

The basic application can easily be customized to the desired state. We have developed the topology visualization component with customization in mind.

  • Modularity

The topology visualization component is developed as separate modules. Developers can choose which modules are needed for a particular project and add other modules whenever they are required.

  • Speed and Responsiveness

Angular and HTML5 Canvas ensure that the application runs smoothly even with large amounts of data.

  • Scalability

The topology visualization component works with small network topologies of a few nodes and links, but it truly shines with large-scale topologies. We are continually adding new features based on our clients' requests and needs. Watch this space for many exciting features to be announced in the near future.

How lighty.io can speed up 5G connectivity deployment!

lighty.io is a Software Development Kit (SDK) which provides components for the development of Software-Defined Networking (SDN) controllers, based on standards commonly used in the networking industry. We have used our OpenDaylight (ODL) experience to create lighty.io, which empowers you to simply develop, integrate, and deploy a tailored SDN controller.

An SDN controller plays an essential role as an orchestrator of networking infrastructure in 5G networks. It is used not only for configuring and monitoring physical routers and switches, but also for managing virtual networks of Virtual Machines (VMs) and containers. One of the many great benefits of an SDN controller (or a set of interconnected SDN controllers) is its holistic view of the network. An SDN controller is also used for connecting User Equipment (UE) or Customer Premises Equipment (CPE) to data centers, and it enables technologies such as network slicing and edge computing to be used in 5G.

Network slicing requires the ability to configure and monitor all networking devices (physical or virtual) along the traffic path. For edge computing, it is necessary to automate device configuration in order to support 5G scenarios such as UE registration.

Figure 1: Overview of a 5G network architecture


Figure 1 (above) shows how an SDN controller based on lighty.io uses southbound plugins to read and write the configuration and state of networking devices in the WAN and in the physical or virtual networks of data centers, both core and edge.

lighty.io supports many southbound protocols for network orchestration, such as the NETCONF and RESTCONF protocol plugins. The number of vendors and devices supporting these protocols grows every year, and we believe that many devices and appliances in Radio, Edge, and WAN will speak these protocols in the 5G era. lighty.io also contains PANTHEON.tech's SNMP SB plugin for integration with legacy systems and for heterogeneous environments where the old and the new mix.

The modular architecture of lighty.io allows adding new plugin implementations for other protocols. lighty.io exposes the configuration and operational data of all devices to an upper layer, where the business logic of administration and automation applications can be implemented. These APIs can be accessed remotely via REST, and other communication methods can be implemented as northbound plugins. The upper-layer applications can be designed as microservices or as parts of the SDN controller.


Figure 2: An example of a 5G network using FD.io data plane

As mentioned above, an SDN controller is also needed to orchestrate virtualized networks in data centers. The open-source project FD.io is one particular example of such technology. FD.io implements a configurable data plane running in user space, not in kernel space. Thanks to this, the FD.io data plane can be deployed as an ordinary microservice, e.g., as a container. FD.io can be used to interconnect containers or VMs in data centers, and all FD.io instances can be orchestrated by a lighty.io-based SDN controller.

Figure 3: An example of a 5G network and integration with other IoT networks

Besides connecting mobile phones and tablets to the network, 5G will also enable a vast number of Internet of Things (IoT) devices to connect to the internet and to communicate directly with each other. IoT solutions can leverage SDN controllers for similar purposes as other 5G technologies do. Specific VNFs for IoT can be deployed and orchestrated by an SDN controller, whether at the edge or in the core data centers. Network slicing could be used for smart-car and smart-city solutions, as shown in Figure 3 (above).

In this way, 5G networks will enable the adoption of IoT in everyday life. The number of IoT devices expected to connect to the internet in the upcoming years is substantial: according to Gartner's predictions, IoT technology will be in 95 percent of electronics by 2020 [1], and Cisco forecasts that 50 billion devices will be connected to the internet by 2020 [2].

Here is a brief summary of the features and benefits provided by lighty.io:

  • The modular architecture of southbound plugins allows implementing communication with physical and virtualized networking devices.
  • Configuration and operational data of all orchestrated devices is exposed via northbound plugins for administration, automation, and analytics purposes.
  • MD-SAL (Model-Driven Software Abstraction Layer) provides the data store and services used by other parts of the SDN controller, such as the southbound and northbound plugins. The data processed by MD-SAL is modeled in the YANG modeling language (see the sketch after this list).
  • NETCONF and RESTCONF southbound plugins are available and field-tested.
  • An SNMP plugin for integration with legacy systems is also available.
  • The NETCONF protocol can be used by lighty.io to orchestrate the FD.io data plane and interconnect VMs or cloud-native applications in data centers.
  • lighty.io has a lightweight hardware footprint, hence it responds promptly.
  • lighty.io is ready for microservice environments.
  • lighty.io provides faster and cheaper testing and CI.
  • lighty.io is an easy tool for developing and deploying SDN in 5G networking infrastructures.
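A minimal MD-SAL usage sketch, assuming an ODL/lighty.io runtime that supplies the DataBroker service. Exact package names differ between releases, so the imports below should be verified against the lighty.io version in use; the read itself targets the standard network-topology model.

// Reads the operational network topology through the MD-SAL data broker.
// Package names vary across ODL/lighty.io releases - verify before use.
import org.opendaylight.mdsal.binding.api.DataBroker;
import org.opendaylight.mdsal.binding.api.ReadTransaction;
import org.opendaylight.mdsal.common.api.LogicalDatastoreType;
import org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.NetworkTopology;
import org.opendaylight.yangtools.yang.binding.InstanceIdentifier;

import java.util.Optional;

public class TopologyReader {

    public static Optional<NetworkTopology> readTopology(DataBroker broker)
            throws Exception {
        // Identifier of the whole network-topology subtree.
        InstanceIdentifier<NetworkTopology> iid =
            InstanceIdentifier.create(NetworkTopology.class);

        // Read the operational (state) view of the subtree.
        ReadTransaction tx = broker.newReadOnlyTransaction();
        return tx.read(LogicalDatastoreType.OPERATIONAL, iid).get();
    }
}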

Ready to test how lighty.io works? Send us an email at sales@lighty.io and we will provide you with a trial version.


Resources:

[1] https://www.gartner.com/smarterwithgartner/gartner-top-strategic-predictions-for-2018-and-beyond/

[2] https://www.cisco.com/c/dam/en_us/about/ac79/docs/innov/IoT_IBSG_0411FINAL.pdf


PANTHEON.tech at Open Networking Summit (ONS) 2018

PANTHEON.tech had a unique opportunity to participate in the Open Networking Summit (ONS) 2018 this year. The central topic of ONS 2018 was data center solutions: ONAP and Kubernetes-based systems. A few new projects under the wings of the Linux Foundation were also introduced, for example Acumos AI, Akraino Edge Stack, and DANOS (the Disaggregated Network Operating System project), an operating system for white-box switches.

PANTHEON.tech has traditionally participated in OpenDaylight (ODL) and FD.io development, and we launched our lighty.io product at ONS. lighty.io changes the conventional OpenDaylight approach to building SDN controller applications, making them smaller, nimbler, and microservice-ready.

lighty.io caught the attention of ODL community members as well as customers struggling with real-life ODL deployments. The solution helps consume and deploy ODL services faster, with a lower cost of ownership. Faster builds, quick test runs, and smaller distribution sizes are the right way to proceed. lighty.io also brings added value to the ONAP ecosystem by providing a runtime for ONAP's SDN-C. We are continuously updating the community with lighty.io use-case examples, including video use cases.


One of the projects in which we participate in the community is The Fast Data Project (FD.io). For the FD.io community, we presented Ligato, Honeycomb's younger brother: an easy-to-learn and easy-to-use integration platform. We love to see that the FD.io community is growing larger, not only in the number of contributors, but in the number of projects and use cases as well. We were also pleased to accept an invitation to the introduction of a new FD.io project, "Dual Modes, Multi-Protocols, Multi-Instances" (DMM), where we discussed use cases and integration paths from the current networking stack. The FD.io community has the potential for further growth, especially as we see the networking industry shifting from closed-source, hardware-based network functions to open-source, software-based solutions.

ONS 2018 was an exciting opportunity for us. It was a forum where we could easily share our knowledge and provide much-needed innovation. Let's see how artificial intelligence and machine learning will change the landscape of networking in the upcoming years. See you at the next ONS event!