Posts

[Release] VPPTop

Here at PANTHEON.tech, we really enjoy perfecting the user experience around VPP and creating projects based around this framework.

That is why we have developed a handy tool, which lets you analyze several crucial data points while using VPP.

We call it VPPTop.

Requirements

In order to run VPPTop, you will need to install these dependencies first:

  1. [Go] version 1.11+
  2. [VPP], version 19.04-3~g1cb333cdf~b41 is recommended. You can find the installation guide here.
  3. Configure VPP, according to the README file

How to install & run

1. Follow the requirements above, to have all dependencies for this software installed

2. Make sure VPP is running beforehand
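
You can verify this, for example, via Terminal (assuming VPP was installed as a systemd service; the vppctl check works either way):

# Both commands should report a running VPP instance:
sudo systemctl status vpp
sudo vppctl show version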

3. Install VPPTop via Terminal to $GOPATH/bin:

go get -u github.com/PantheonTechnologies/vpptop

4. Run the tool from Terminal:

sudo -E vpptop

What is VPPTop?

VPPTop is an application for sysadmins and testers, implemented in Go. The goal of VPPTop is to provide VPP statistics which are updated in real-time, in one terminal window.

Before the release of our tool, admins could only rely on the CLI: they first had to request the data and then received information that was neither up-to-date nor in real-time.

Now, you can use our VPPTop tool, to find up-to-date information on your framework!

Version 1.11 supports statistics for:

  • Interfaces
  • Nodes
  • Errors
  • Memory Usage per Thread
  • Per Thread-Data

Version 1.11 supports these functionalities:

  • Clearing counters
  • Filter by Name
  • Sorting

Why should you start to use Vector Packet Processing?

The main advantages are:

  • high performance with a proven technology
  • production level quality
  • flexible and extensible

The main advantage of VPP is that you can plug in a new graph node, adapt it to your network's purposes and run it right away. Including a new plugin does not mean you need to change your core code with each new addition. Plugins can either be included in the processing graph, or they can be built outside the source tree and become an individual component in your build.

Furthermore, this separation of plugins makes crashes a matter of a simple process restart: a single plugin failure does not require your whole build to be restarted.

You can read more in our PANTHEON.tech series on VPP.


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.

[How-To] Use AWX and Ansible for Automation & SFC in a multi-tenant Edge & DC environment

Ansible is a powerful automation tool, widely used by administrators for the automation of configuration and management of servers.

It started being adopted by network engineers as a tool for automation of configuration and management of networking devices (bare metal or virtual). Steps of the specific automated process are implemented as tasks, defined in YAML format. Tasks can be grouped into roles with other tasks and are executed as so-called plays, defined in another YAML file called playbook.

Playbooks also associate definitions of variables and list of hosts, where the roles and tasks will be performed. All of the mentioned (tasks, roles, playbooks, variables, hosts) can be specified as YAML files with a well-defined directory structure.
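
For illustration, a minimal (hypothetical) layout and its invocation from the CLI could look like this; the file names are placeholders, not part of the demo repository:

# site.yaml                      <- playbook: maps hosts to roles
# inventory/hosts                <- inventory: managed hosts and groups
# roles/common/tasks/main.yaml   <- tasks grouped into a role
ansible-playbook -i inventory/hosts site.yaml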

This makes Ansible suitable to be used as a base of Infrastructure as Code (IaC) approach to configuration management and orchestration in the telco industry, data centers (orchestration of virtualization and networking) & many more.

Figure A: An Ansible Playbook example structure

Ansible playbook structure

AWX provides a UI and REST API on top of Ansible and also adds management of inventories (lists of hosts and groups of hosts), credentials (SSH & more) and integration with code versioning systems (e.g., Git).

In AWX, you can define job templates, which associate playbooks with repositories, credentials, configuration, etc., and you can execute them. Job templates can be used to build graphs, defining the order of execution of multiple job templates (sequentially or in parallel). Such graphs are called workflows, and they can use job templates & other workflows as graph nodes.

Figure B: AWX WorkFlow and Job Templates

AWX WorkFlow and Job Templates

In our previous blogs, we have demonstrated the usage of an SDN controller based on lighty.io as a controller of OVSDB-capable devices and OpenFlow switches.

In this article, we’re going to describe some example Ansible playbooks and Ansible modules, using an SDN controller based on lighty.io. We will also be orchestrating Open vSwitch instances as a networking infrastructure of core data centers and edge data centers. This way, we are able to build solutions, utilizing Service Functions (SFs) and Service Function Chaining (SFC), which are dedicated to a specific tenant or particular service.


Figure 1: Data Center Simulation Setup

Data Center Simulation Setup

In the above Figure 1, we can see the topology of the testing setup, where 3 virtual machines run in VirtualBox and are connected to an L2 switch, representing the underlay network.

Each VM has an IP address configured from the same subnet. Two of the VMs simulate data centers (servers). In each DC VM, there is an Open vSwitch installed and running without any configuration (using the default configuration, which will be overridden by our SDN controller), together with the ovs-vsctl and ovs-ofctl utilities.

In the management VM, Ansible and AWX are installed & running. An SDN controller based on lighty.io, using OVSDB and OpenFlow southbound plugins and a RESTCONF northbound plugin, runs in the same VM.

This way, it is possible to use the SDN controller to manage and configure Open vSwitch instances over the HTTP REST API – according to RESTCONF specification and YANG models of OVSDB and OpenFlow plugins. See the implementation of Ansible roles and Ansible module for more details regarding specific RESTCONF requests usage.
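
For illustration, a RESTCONF read against the demo controller could look like the following sketch (the controller address follows the sdn_controller_url used in the configuration below; the admin/admin credentials are an assumption and may differ in your setup):

curl --request GET \
  --url http://192.168.10.5:8888/restconf/data/network-topology:network-topology \
  --header 'Authorization: Basic YWRtaW46YWRtaW4='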

In addition to the RESTCONF requests, Ansible also uses CLI commands over SSH to create namespaces and veth links, and to connect them with a bridge instance created in the respective Open vSwitch instance. OpenFlow flows are used to configure rules forwarding packets from one port of a virtual bridge to another (and vice versa). All flow configurations are sent as RESTCONF requests to the SDN controller, which forwards them to the specific bridge of the specific Open vSwitch instance.

Starting our Datacenters

We will also use a simple testing command for each DC VM. The command sends 3 ICMP echo requests from the tenant's namespace to the other tenant's namespace in the other DC VM. The requests are sent in an infinite loop (using the watch command). Therefore, at first, they will show that the namespaces don't exist (because they really don't yet, according to Figure 1).

But after the successful provisioning and configuration of the desired virtual infrastructure, they should show that the ping command was successful, as we will demonstrate later.

Figure 2 and Figure 3 show an example of the output of the testing commands.

DataCenter-1:

watch sudo ip netns exec t1-s1 ping -c 3 10.10.1.2

Figure 2 for DataCenter-1

DataCenter-2:

watch sudo ip netns exec t1-s2 ping -c 3 10.10.1.1

Figure 3 for DataCenter-2

The playbooks and the Ansible module made for this demo can be executed directly from CLI (see README.md) or used with AWX. We have made a workflow template in AWX, which executes playbooks in the correct order. You can see the graph of execution in Figure 4:

Figure 4: Graph of Playbook Execution

Tenant provisioning with SFC and vFW SF workflow execution

After successful execution of the workflow, we have a provisioned and configured network namespace in each DC VM, which is connected to the newly created bridge in the respective Open vSwitch. There's also another namespace in the DC-1 VM, which is connected to the bridge using two veth links:

  • One link for traffic to/from tenant’s namespace
  • One for traffic to/from the tunnel interface

The tunnel interface is also connected to the bridge and uses the VXLAN ID 123. The bridge in each Open vSwitch instance is configured by adding OpenFlow flows, which forward packets between ports of the bridge. The network namespace in the DC-1 VM, called v-fw, is used for an SF which implements a virtual firewall using a Linux bridge (br0) and iptables.

After provisioning of the SF, both veth links are connected to the Linux bridge and the iptables rules are empty, so all packets (frames) can traverse the virtual firewall SF. The resulting system is displayed in Figure 5.

Figure 5 – Resulting structure of our system

Resulting system after successful provisioning
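
As an optional sanity check, the provisioned state can be inspected from the DC-1 VM with the standard OVS and iproute2 utilities (bridge and namespace names follow the demo configuration):

sudo ovs-vsctl show                              # t1-br1 and its ports should be listed
sudo ovs-ofctl -O OpenFlow13 dump-flows t1-br1   # flows pushed by the SDN controller
ip netns list                                    # should include t1-s1 and v-fw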

Now, if we check the testing ping commands, we can see that the ICMP packets are able to reach their destination; see Figure 6 and Figure 7.

Figure 6 – Successful ping commands in each direction (DataCenter-1)

Figure 7 – Successful ping commands in each direction (DataCenter-2)

The Path of ICMP Packets

In Figure 8, there are details of the OpenFlow flows configured during provisioning in each bridge of the Open vSwitch instances.

Note that the ICMP packet is not the first packet in the communication initiated by the ping command: the first is an ARP broadcast, which traverses the same path as the ICMP packets.

Here’s a description of the path traversed by ICMP packets from t1-s1 to t1-s2:

  1. ICMP request message starting from the namespace t1-s1, with the destination in t1-s2. It is forwarded from veth1 interface of t1-s1 and received in the interface t1-s1-veth1 of the bridge t1-br1
  2. There is a configured flow, which forwards every single L2 frame (not only L3 packet) received at port t1-s1-veth1, to the port v-fw-veth2. This is connected into the v-fw namespace
  3. The packet is received at veth2 of v-fw and is forwarded by br0 to veth3. Since there aren’t any iptables rules configured yet, the packet is forwarded through veth3 outside the v-fw SF
  4. The packet is received at v-fw-veth3 port of t1-br1 and there is an applied rule for forwarding each packet (frame) received at v-fw-veth3, directly to the tun123 port
  5. The port tun123 is a VXLAN tunnel interface, with VNI (VXLAN Network Identifier) set to 123. Each L2 frame traversing the tun123 port is encapsulated into VXLAN frame (outer Ethernet, IP, UDP and VXLAN headers are added before the original – inner – L2 frame)
  6. Now the destination IP address of the packet becomes 192.168.10.20 (according to the configuration of the VXLAN tunnel interface) and is forwarded by the networking stack of the DC-1 VM’s host OS, through the enp0s8 interface to the underlying network
  7. The underlying network forwards the packet to DC-2 VM, through the enp0s8 and the host OS networking stack. It finally reaches the tun123 interface of the bridge t1-br2
  8. The VXLAN packet is decapsulated and the original ICMP packet is forwarded from tun123 to t1-s2-veth1, according to the OpenFlow flows in t1-br2
  9. Now, it is received in t1-s2 at the interface of veth1 and processed by the networking stack in the namespace. The ICMP echo reply is sent through the same path back to the t1-s1 in the DC-1 VM

Figure 8: Detail of the OpenFlow flows configured in the bridges in Open vSwitch instances

Using the iptables as a Virtual Firewall

Now, we can use the virtual firewall SF and apply some iptables rules to the traffic generated by the ping commands. Here is an example of a command, submitted at the DC1 VM, which executes the iptables command in the v-fw namespace. The iptables command adds a rule to the FORWARD chain, which drops ICMP echo request packets; the rule is applied at the veth3 interface:

sudo ip netns exec v-fw iptables -A FORWARD -p icmp --icmp-type 8 -m state --state NEW,ESTABLISHED,RELATED -m physdev --physdev-in veth3 -j DROP
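
To inspect the rule, or to remove it again later, the usual iptables list and delete invocations apply (same namespace, same rule specification):

sudo ip netns exec v-fw iptables -L FORWARD -v
sudo ip netns exec v-fw iptables -D FORWARD -p icmp --icmp-type 8 -m state --state NEW,ESTABLISHED,RELATED -m physdev --physdev-in veth3 -j DROP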

The result of this rule can be seen in Figure 9 and Figure 10: the ICMP traffic is now dropped, not only in the direction from DC2 to DC1, but also in the reverse direction (from DC1 to DC2).

Figure 9: The ping commands after the iptables role applied (DataCenter-1)

Figure 10: The ping commands after the iptables role applied (DataCenter-2)

Ansible playbooks & module

We’re going to take a closer look at the playbooks. These are used for the provisioning of the infrastructure above. We will follow the graph from Figure 4, where the workflow template execution is shown. The template only executes job templates, which are associated with exactly one playbook.

Each playbook made for this demo uses the same configuration structure, which has been designed to be easily extensible and re-usable for other scenarios, which use the same playbooks.

See the following block, showing the example configuration used for this demo. The same configuration in JSON format is stored in our repository, as cfg/cfg_example.json. Here are the main decisions, which will help us to achieve the desired re-usability:

  1. The configuration contains a list of configurations for each DC, called cfg_list. The playbook performs a lookup of the specific configuration, according to the items “id” or “tun_local_ip“, which must match the DC’s hostname or IP address, respectively
  2. The single cfg_list item related to specific DC may include a configuration of SFC as “sf_cfg” item. If this configuration exists, then the SFC provisioning playbook (apb_set_chaining.yaml) will be executed for this DC (DC1 is an example in this demo)
  3. For the DC which doesn’t have specific SFC configuration, simple provisioning (apb_set_flows.yaml) is used, which interconnects tenant’s namespace with the tunnel interface (DC2 is an example in this demo)
ansible_sudo_pass: admin
sdn_controller_url: 'http://192.168.10.5:8888'
cfg_list:
  - id: swdev-dc-1
    bridge_name: t1-br1
    topology_id: 1
    of_ctrl: 'tcp:192.168.10.5:6633'
    server_name: t1-s1
    server_ip: 10.10.1.1
    server_veth: veth1
    server_of_port_id: 100
    tun_vxlan_id: 123
    tun_local_ip: 192.168.10.10
    tun_remote_ip: 192.168.10.20
    tun_of_port_id: 200
    sf_cfg:
      sf_id: v-fw
      con_left:
        name: veth2
        port_id: 10
      con_right:
        name: veth3
        port_id: 20
  - id: swdev-dc-2
    bridge_name: t1-br2
    topology_id: 2
    of_ctrl: 'tcp:192.168.10.5:6633'
    server_name: t1-s2
    server_ip: 10.10.1.2
    server_veth: veth1
    server_of_port_id: 100
    tun_vxlan_id: 123
    tun_local_ip: 192.168.10.20
    tun_remote_ip: 192.168.10.10
    tun_of_port_id: 200

Job Template Descriptions

Here is a short description of each job template & related playbook, together with the source code of the playbook. These include more detailed comments regarding the steps (tasks and roles) executed by each play of the playbook:

1. SET_001_SDN_tenant_infra is the job template which executes the playbook apb_set_tenant.yaml. This playbook creates and connects the tenant's namespaces, bridges, veth links and the VXLAN tunnel. See Figure 11 for how the system looks after the execution of this playbook.

Figure 11: The state of the infrastructure after the execution of SET_001_SDN_tenant_infra template

# Read the configuration of the current host and create tenant's namespace and veth.
# Connect one end of the veth to the namespace and configure IP address and set the
# interface UP.
# Configure also OVS to use SDN controller as OVSDB manager.
- hosts: all
  serial: 1
  roles:
    - read_current_host_cfg
    - set_tenant_infra
 
# Send request to the SDN controller which connects the SDN controller to the OVS.
# Use SDN controller to create tenant's bridge in the OVS and connect
# the veth also into the bridge.
# Create also tunnel interface using VXLAN encapsulation and connect
# it to the bridge. (Linux kernel of the host OS will take care of
# forwarding packets to/from the tunnel)
- hosts: all
  connection: local
  serial: 1
  roles:
    - set_tenant_infra_cfg
 
# Setup OpenFlow connection from the bridge in OVS to the SDN controller.
# It must be done here because when the bridge connects to OpenFlow controller it deletes
# all interfaces and flows configured previously
- hosts: all
  serial: 1
  roles:
    - role: cfg_ovs_br_ctrl
      when: current_host_cfg.of_ctrl is defined

2. SET_002_SF_chaining executes the playbook apb_set_chaining.yaml, which runs for hosts that have the “sf_cfg” item specified in their configuration, so in this demo it is DC1 only. This playbook runs in parallel with the SET_002_tenant_flows template and creates a namespace for the SF, together with the two required veth links and their connections. The playbook also configures the corresponding OpenFlow flows in the bridge.

See Figure 12, where you can see the difference after the successful execution.

Figure 12: The state of the infrastructure after the execution of SET_002_SF_chaining template

# Find the configuration for the current host and configure SFC infrastructure
# if the configuration contains item: sf_cfg
# The SFC infrastructure consists of two veth links and one namespace for the SF.
# Connect both links to the SF namespace by one side and to the tenant's bridge by
# the other side. Set links to UP state.
- hosts: all
  roles:
    - read_current_host_cfg
    - role: set_sf_chaining_infra
      vars:
        con_defs:
          - "{{ current_host_cfg.sf_cfg.con_left }}"
          - "{{ current_host_cfg.sf_cfg.con_right }}"
      when: current_host_cfg.sf_cfg is defined
 
# Use the SDN controller to configure OpenFlow flows which set up the forwarding
# of packets between ports in the tenant's bridge (veth links and the VXLAN tunnel).
- hosts: all
  connection: local
  roles:
    - role: set_sf_chaining_cfg
      when: current_host_cfg.sf_cfg is defined

3. SET_002_tenant_flows executes the playbook apb_set_flows.yaml, which runs for hosts that don't have the “sf_cfg” item specified in their configuration; in this demo it is DC2. This playbook runs in parallel with the SET_002_SF_chaining template and just configures OpenFlow flows in the bridge, since everything has already been created and connected by the SET_001_SDN_tenant_infra template. See Figure 13.

Figure 13: The state of the infrastructure after the execution of SET_002_tenant_flows template

# When the sf_cfg item of the current host's configuration is not defined then
# use the SDN controller and configure simple flows forwarding packets between
# veth link and the VXLAN tunnel interface.
- hosts: all
  connection: local
  roles:
    - read_current_host_cfg
    - role: set_tenant_flows
      when: current_host_cfg.sf_cfg is not defined

4. SET_003_SF_bridge is the job template executing the apb_set_bridge.yaml playbook. It creates a Linux bridge in the SF namespace for hosts with “sf_cfg” defined. Both veth links of the SF namespace are connected to the Linux bridge. See Figure 14.

Figure 14: The state of the infrastructure after the execution of SET_003_SF_bridge template

# Create linux bridge in the SF namespace and connect veth links to the
# linux bridge and set the bridge UP.
- hosts: all
  roles:
    - read_current_host_cfg
    - role: create_bridge_in_ns
      vars:
        net_ns_name: "{{ current_host_cfg.sf_cfg.sf_id }}"
        int_list:
          - "{{ current_host_cfg.sf_cfg.con_left.name }}"
          - "{{ current_host_cfg.sf_cfg.con_right.name }}"
      when: current_host_cfg.sf_cfg is defined

The playbooks used in this demo are not idempotent, so they can't be executed successfully multiple times in a row. Instead, we have implemented two groups of playbooks.

One group sets up everything used in this demo. The other group deletes and un-configures everything, ignoring possible errors of particular steps (tasks), so you can use it for cleaning up the setup, if necessary.

De-provisioning

We have also implemented three playbooks, which delete changes made by the provisioning playbooks. We have created a workflow in AWX running these playbooks in the correct order.

See the playbook sources and the README.md file for more information.

Figure 15: Tenant de-provisioning with SFC and vFW SF workflow execution

Multitenancy and isolation

In Figure 16, there's an example where a second tenant is added. The tenant has two servers, one in each DC. The setup for the second tenant is simple, without SFC. As you can see, the tenant has its own virtual bridge in each Open vSwitch instance.

IP addresses of the new tenant's servers are identical to the IP addresses of the first tenant's servers. This is possible because the traffic of each tenant is isolated, using separate virtual bridges and different VXLAN tunnels for each tenant or service.

The new tenant can be added, using the same group of playbooks and the same workflow template. The only thing which needs to be changed is the configuration used with the workflow template. See the example below in Figure 16.

Figure 16: After the second tenant is added

Configuration example for the second tenant provisioning:

ansible_sudo_pass: admin
sdn_controller_url: 'http://192.168.10.5:8888'
cfg_list:
  - id: swdev-dc-1
    bridge_name: t2-br1
    topology_id: 3
    of_ctrl: 'tcp:192.168.10.5:6633'
    server_name: t2-s1
    server_ip: 10.10.1.1
    server_veth: veth4
    server_of_port_id: 100
    tun_vxlan_id: 456
    tun_local_ip: 192.168.10.10
    tun_remote_ip: 192.168.10.20
    tun_of_port_id: 200
  - id: swdev-dc-2
    bridge_name: t2-br2
    topology_id: 4
    of_ctrl: 'tcp:192.168.10.5:6633'
    server_name: t2-s2
    server_ip: 10.10.1.2
    server_veth: veth4
    server_of_port_id: 100
    tun_vxlan_id: 456
    tun_local_ip: 192.168.10.20
    tun_remote_ip: 192.168.10.10
    tun_of_port_id: 200

Role of the SDN controller

In this demo, we have used an SDN controller based on lighty.io for the configuration of Open vSwitch instances over OVSDB and OpenFlow protocols. The usage of the SDN controller brings the following benefits:

1. Unified REST HTTP API: exposes the data of orchestrated devices (in this case, Open vSwitch instances) and allows building upper-layer services on top of this API (e.g. dashboards, analytics, automation and more).

One example of such an application is OFM (OpenFlow Manager), already mentioned in our previous blog.

In Figure 17, a screenshot of the OFM Web UI is shown, displaying the data of the four OpenFlow bridges which are configured after the second tenant has been added. You can also see the details of a specific flow matching an in-port, with an action of the Output port type.

2. Caching: The controller allows caching of the state and configuration of the orchestrated device. The cached configuration can be persistently stored and pushed to the device after a restart of the managed device.

3. Additional logic implementation: e.g. monitoring, resource management, authorization and accounting, or decision making according to an overview of the whole network.

4. Additional deployment: Controllers can be deployed as a geo-distributed clustered registry of multi-vendor networking devices, of the whole core DC network, backbone networks or Edge DC networks.

This way, you can access all the data you need from all of your networking devices, on the path from the core DC to the edge and RAN (Radio Access Network).

5. Underlay Networking: The SDN controllers, based on lighty.io, can be used not only for configuration, orchestration, and monitoring of virtualized networking infrastructure and overlay networking, but also for underlay networking.

Figure 17: Screenshot of the OFM WEB UI, of the SDN controllers OpenFlow data

What have we done today?

We have demonstrated the usage of Ansible, AWX and an SDN controller based on lighty.io, for automation of provisioning of services for multi-tenant data centers.

We have implemented Ansible playbooks and modules for orchestration of virtualized networking infrastructure and provisioning of tenants' containers (simulated by namespaces). The playbooks and the module can be re-used for multiple use cases and scenarios and can be extended according to specific needs.

You could see how to utilize Open vSwitch and its OpenFlow capable virtual bridges for the creation of an isolated, virtual network infrastructure and Service Function Chaining (SFC). Open vSwitch bridges can be replaced by any OpenFlow capable switch (virtual or bare metal) in real deployments in core data centers or at the edge.

Also, networking devices without OpenFlow support can be used if they implement some “SDN ready” management interface, e.g.: NETCONF, RESTCONF or gNMI.

All of the components used in this demo, including the SDN controller, can be easily integrated into microservices architectures and deployed as Cloud Native applications (services).

By adding upper-layer services (dashboards, monitoring, automation, business logic, etc.), you can continuously build solutions for your SDN networks, edge computing or 5G networks.

If you’re interested in a commercial solution, feel free to contact us.

Tomáš Jančiga


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.

PANTHEON.tech @ ONS 2019

Our PANTHEON.tech all-star team, comprised of Juraj Veverka, Martin Varga, Róbert Varga & Štefan Kobza, visited the Open Networking Summit North America 2019, in the sunny town of San Jose, California.

In the span of 3 days, keynotes and presentations were held by notable personalities from various networking companies & technological giants.

“The networking part was a welcoming mixture of old acquaintances & new faces interested in our products, solutions and in the company in general.”

“It was crucial to present our ideas, concepts & goals in short messages or a few slides. Since the approximate time of a booth visit was 2-3 minutes, I had to focus precisely and point out why we matter.” This was not an easy task, since there were companies like Ericsson, Intel, Huawei or IBM at the conference.

Our Skydive/VPP Demo

“Before the actual conference, we were asked to come up with potential demo ideas, which would be presented at ONS. Our SkyDive – VPP Integration demo was accepted and opened the doors to a wide interest in PANTHEON.tech.” As Juraj recollects, after the keynote, waves of visitors swarmed the nearest booths, including ours and our Skydive – VPP Integration demo:

Skydive is an open source, real-time network topology, and protocols analyzer. It aims to provide a comprehensive way of understanding what is happening in the network infrastructure.

We have prepared a demonstration topology containing running VPP instances. Any changes to the network topology are detected in real time.

Skydive is a tool that can quickly show what is currently happening in your network. VPP serves the purpose of fast packet processing, which is one of the main building blocks of a (virtual) network. If anybody wants a fast network, VPP is the way to go.

Connecting Skydive & VPP is almost a necessity, since diagnosing VPP issues is a lot more difficult without Skydive. Skydive leads you towards the source of the issue, as opposed to the time-consuming study of the VPP console.

The importance of this demo stems from the need to extract data from VPP. Since the platform does not easily provide the required output, the demo provides a bird's-eye view of the network via Skydive.

“The Skydive demo was a huge hit. Since it was presented in full-screen and had its own booth, we gained a lot of attention due to this unique solution.” Attendees were also interested in our Visibility Package solution and what kind of support PANTHEON.tech could provide for their future network endeavors.

Improvements & ONS 2020

“Inbetween networking, presenting and attending keynotes, we have managed to do some short sightseeing and enjoyed the city of San Jose a little bit. But of course, work could not have waited as we have been excited to come back to the conference atmosphere each day.”

But how was ONS 2019 in comparison to last year's conference? “In comparison to last year, the Open Networking Summit managed to expand & increase in quality. We are glad that we made many new acquaintances, which we will hopefully meet again next year!”

We would like to thank the Linux Foundation Networking for this opportunity. See you next year!

[lighty.io] OpenFlow Integration

lighty.io 9.2.x provides examples of OVSDB & OpenFlow SDN controllers for integration with your SDN solution. Those examples will guide you through lighty.io controller core initialization, with OVSDB and/or OpenFlow southbound plugins attached. You can use those management protocols with a really small memory footprint and simple runtime packaging.

Today, we will show you how to run and integrate the OpenFlow plugin in lighty.

What is OpenFlow?

OpenFlow (OF) is a communications protocol that gives access to the forwarding plane of a network switch or router over the network. OpenFlow can be applied for:

  • Quality of Service measurement by traffic filtering
  • Network monitoring (i.e., using the controller as a monitoring device)

In a virtual networking context, OF can be used to program virtual switches with tenant level segregation tags (for example VLANs). In the context of NFV, OF can be used to re-direct traffic to a chain of services. It is managed by the Open Networking Foundation.

Why do we need OpenFlow?

Routers and switches can achieve various (limited) levels of user programmability. However, engineers and managers often need more than the limited functionality of this hardware. OpenFlow achieves consistent traffic management and engineering exactly for these needs, by controlling the functions independently of the hardware used.

PANTHEON.tech has managed to implement the OpenFlow plugin in lighty-core. Today, we will show you how you can run the plugin yourself.

Prerequisites

In order to build and install lighty-core artifacts locally, follow the procedure below:

  1. Install JDK – make sure JDK 8 is installed
  2. Install maven – make sure you have Maven 3.5.0 or later installed
  3. Setup maven – make sure you have proper settings.xml in your ~/.m2 directory
  4. Download/clone lighty-core
  5. (Optional) Download/clone the OpenFlow Manager App

Build and Run OpenFlow plugin example

1. Download the lighty-core repository:

git clone https://github.com/PantheonTechnologies/lighty-core.git

2. Checkout lighty-core version 9.2.x:

git checkout 9.2.x

3. Build the lighty-core project with the maven command:

mvn clean install

This will download all important dependencies and create a .zip archive in the ‘lighty-examples/lighty-community-restconf-ofp-app’ directory.

Extract this archive and start the ‘start-ofp.sh’ script, or run the .jar file using Java 8 with the command:

java -jar lighty-community-restconf-ofp-app-9.2.1-SNAPSHOT.jar

Use custom config files

The previous command will run the application with the default configuration. In order to run it with a custom configuration, edit (or create a new) JSON configuration file. An example of the JSON configuration can be found in the lighty-community-restconf-ofp-app-9.2.1-SNAPSHOT folder.

Using the same command to start the application, pass the path to the configuration file as an argument:

java -jar lighty-community-restconf-ofp-app-9.2.1-SNAPSHOT.jar sampleConfigSingleNode.json

OpenFlow plugin and RESTCONF Configuration

The configuration that decides what can be changed is stored in sampleConfigSingleNode.json. To apply all changes, we need to start OFP with this configuration passed as a Java argument.

ForwardingRulesManager

FLOWs can be added to the OpenFlow Plugin (OFP) in two ways:

  • Sending the FLOW to the config data store. ForwardingRulesManager (FRM) listens to the config data store; when it changes, FRM will sync the changes to the connected device, once it is available.
    A flow added this way is persistent.
  • Sending an RPC message directly to the device. This option works without FRM. When the device is restarted, this configuration will disappear.

In case you need to disable FRM, create an external configuration & set enableForwardingRulesManager to false. Then, simply start OFP with this external configuration.
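
As a sketch, assuming the property appears under exactly this name in the JSON file (check sampleConfigSingleNode.json for the actual key), the toggle could be flipped and the application started like this:

# Hypothetical: flip the FRM toggle in a copy of the sample configuration
sed 's/"enableForwardingRulesManager": true/"enableForwardingRulesManager": false/' \
    sampleConfigSingleNode.json > customConfig.json
java -jar lighty-community-restconf-ofp-app-9.2.1-SNAPSHOT.jar customConfig.json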

RESTCONF

The RESTCONF configuration can be changed in the JSON config file sampleConfigSingleNode.json, mentioned above. It is possible to change the RESTCONF port, IP address or version of RESTCONF. Currently, the version of RESTCONF is set to DRAFT_18, but it can be set to DRAFT_02.

"restconf": {
    "httpPort": 8888,
    "restconfServletContextPath": "/restconf",
    "inetAddress": "0.0.0.0",
    "jsonRestconfServiceType": "DRAFT_18"
  },

How to start the OpenFlow example

Firstly, start the OpenFlow example application.

Make sure to read through this guide on how to install mininet.

The next step is to start mininet, with at least one Open vSwitch (use version 2.2.0 or higher).

sudo mn --controller=remote,ip=<IP_OF_RUNNING_LIGHTY> --topo=tree,1 --switch ovsk,protocols=OpenFlow13

For this explanation of OFP usage, RESTCONF is set to DRAFT_18. All RESTCONF calls used in this example can be imported into Postman from the file OFP_postman_collection.json, located in the project resources.

We will quickly check if the controller is the owner of the connected device. If not, the controller is not running or the device is not properly connected:

curl --request GET \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/entity-owners:entity-owners \
  --header 'Authorization: Basic YWRtaW46YWRtaW4='

If you followed the instructions and there is a single controller running (not a cluster) and only one device connected, the result is:

{
  "entity-owners": {
    "entity-type": [{
      "type": "org.opendaylight.mdsal.ServiceEntityType",
      "entity": [{
        "id": "/odl-general-entity:entity[name='openflow:1']",
        "candidate": [{
          "name": "member-1"
        }],
        "owner": "member-1"
      }]
    }, {
      "type": "org.opendaylight.mdsal.AsyncServiceCloseEntityType",
      "entity": [{
        "id": "/odl-general-entity:entity[name='openflow:1']",
        "candidate": [{
          "name": "member-1"
        }],
        "owner": "member-1"
      }]
    }]
  }
}

Let’s get information about the connected device. If you want to see the whole OFP inventory, use the ‘get inventory‘ call from the Postman collection.

From config:

curl --request GET \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A1 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4='

From operational:

curl --request GET \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A1?content=nonconfig \
  --header 'Authorization: Basic YWRtaW46YWRtaW4='

JSON result starts with:

{
  "node": [{
    "id": "openflow:1",
    "node-connector": [{
      "id": "openflow:1:LOCAL",
      "flow-node-inventory:peer-features": "",
      "flow-node-inventory:advertised-features": "",
      "flow-node-inventory:port-number": 4294967294,
      "flow-node-inventory:hardware-address": "4a:15:31:79:7f:44",
      "flow-node-inventory:supported": "",
      "flow-node-inventory:current-speed": 0,
      "flow-node-inventory:current-feature": "",
      "flow-node-inventory:state": {
        "live": false,
        "link-down": true,
        "blocked": false
      },
      "flow-node-inventory:maximum-speed": 0,
      "flow-node-inventory:name": "s1",
      "flow-node-inventory:configuration": "PORT-DOWN"
    }, {
      "id": "openflow:1:2",
      "flow-node-inventory:peer-features": "",
      "flow-node-inventory:advertised-features": "",
      "flow-node-inventory:port-number": 2,
      "flow-node-inventory:hardware-address": "fa:c3:2c:97:9e:45",
      "flow-node-inventory:supported": "",
      "flow-node-inventory:current-speed": 10000000,
      "flow-node-inventory:current-feature": "ten-gb-fd copper",
      "flow-node-inventory:state": {
        "live": false,
        "link-down": false,
        "blocked": false
      },
      .
      .
      .

Add FLOW

Now try to add a table-miss flow, which will modify the switch to send all not-matched packets to the controller via packet-in messages.

To config data-store:

curl --request PUT \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A1/table=0/flow=1 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/xml' \
  --data '<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
   <barrier>false</barrier>
   <cookie>54</cookie>
   <flags>SEND_FLOW_REM</flags>
   <flow-name>FooXf54</flow-name>
   <hard-timeout>0</hard-timeout>
   <id>1</id>
   <idle-timeout>0</idle-timeout>
   <installHw>false</installHw>
   <instructions>
       <instruction>
           <apply-actions>
               <action>
                   <output-action>
                       <max-length>65535</max-length>
                       <output-node-connector>CONTROLLER</output-node-connector>
                   </output-action>
                   <order>0</order>
               </action>
           </apply-actions>
           <order>0</order>
       </instruction>
   </instructions>
   <match/>
   <priority>0</priority>
   <strict>false</strict>
   <table_id>0</table_id>
</flow>'

Directly to a device via RPC call:

curl --request POST \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/operations/sal-flow:add-flow \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/json' \
  --data '{
    "input": {
      "opendaylight-flow-service:node":"/opendaylight-inventory:nodes/opendaylight-inventory:node[opendaylight-inventory:id='\''openflow:1'\'']",
      "priority": 0,
      "table_id": 0,
      "instructions": {
        "instruction": [
          {
            "order": 0,
            "apply-actions": {
              "action": [
                {
                  "order": 0,
                  "output-action": {
                    "max-length": "65535",
                    "output-node-connector": "CONTROLLER"
                  }
                }
              ]
            }
          }
        ]
      },
      "match": {
      }
    }
}'

Get FLOW

Check if flow is in data-store:

In the config:

curl --request GET \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A1/table=0 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4='

In operational:

curl --request GET \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A1/table=0?content=nonconfig \
  --header 'Authorization: Basic YWRtaW46YWRtaW4='

Result:

{
    "flow-node-inventory:table": [
        {
            "id": 0,
            "opendaylight-flow-table-statistics:flow-table-statistics": {
                "active-flows": 1,
                "packets-looked-up": 14,
                "packets-matched": 4
            },
            "flow": [
                {
                    "id": "1",
                    "priority": 0,
                    "opendaylight-flow-statistics:flow-statistics": {
                        "packet-count": 4,
                        "byte-count": 280,
                        "duration": {
                            "nanosecond": 936000000,
                            "second": 22
                        }
                    },
                    "table_id": 0,
                    "cookie_mask": 0,
                    "hard-timeout": 0,
                    "match": {},
                    "cookie": 54,
                    "flags": "SEND_FLOW_REM",
                    "instructions": {
                        "instruction": [
                            {
                                "order": 0,
                                "apply-actions": {
                                    "action": [
                                        {
                                            "order": 0,
                                            "output-action": {
                                                "max-length": 65535,
                                                "output-node-connector": "CONTROLLER"
                                            }
                                        }
                                    ]
                                }
                            }
                        ]
                    },
                    "idle-timeout": 0
                }
            ]
        }
    ]
}

Get FLOW directly from modified device s1 in the command line:

sudo ovs-ofctl -O OpenFlow13 dump-flows s1

Device result:

cookie=0x36, duration=140.150s, table=0, n_packets=10, n_bytes=700, send_flow_rem priority=0 actions=CONTROLLER:65535

Update FLOW

This works the same way as adding a flow: OFP will find openflow:1, table=0, flow=1 from the URL and update the flow with the changes from the body.
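
For example, a sketch of such an update, re-sending the PUT from the Add FLOW step with the hard-timeout changed to 300 seconds (all other fields kept from the original flow):

curl --request PUT \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A1/table=0/flow=1 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/xml' \
  --data '<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<flow xmlns="urn:opendaylight:flow:inventory">
   <id>1</id>
   <table_id>0</table_id>
   <priority>0</priority>
   <cookie>54</cookie>
   <flags>SEND_FLOW_REM</flags>
   <hard-timeout>300</hard-timeout>
   <idle-timeout>0</idle-timeout>
   <match/>
   <instructions>
       <instruction>
           <order>0</order>
           <apply-actions>
               <action>
                   <order>0</order>
                   <output-action>
                       <max-length>65535</max-length>
                       <output-node-connector>CONTROLLER</output-node-connector>
                   </output-action>
               </action>
           </apply-actions>
       </instruction>
   </instructions>
</flow>'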

Delete FLOW

From config data-store:

curl --request DELETE \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A1/table=0/flow=1 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/xml' \
  --data ''

Via RPC calls:

curl --request POST \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/operations/sal-flow:remove-flow \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/json' \
  --data '{
    "input": {
      "opendaylight-flow-service:node":"/opendaylight-inventory:nodes/opendaylight-inventory:node[opendaylight-inventory:id='\''openflow:1'\'']",
      "table_id": 0
    }
}'

Proactive flow installation & traffic monitor via Packet-In messages

In order to create traffic, we need to set up the topology behavior. There are three methods of flow table population (reactive, proactive, hybrid). We encourage you to read more about their differences.

In our example, we use proactive flow installation, which means that we proactively create flows before the traffic starts.

  1. Start lighty OpenFlow example lighty-community-restconf-ofp-app
  2. Create a mininet with linear topology and 2 Open vSwitches:
sudo mn --controller=remote,ip=<IP_OF_RUNNING_LIGHTY>:6633 --topo=linear,2 --switch ovsk,protocols=OpenFlow13

Now we can verify that the connection between the devices is not yet established:

mininet> pingall
*** Ping: testing ping reachability
h1 -> X
h2 -> X
*** Results: 100% dropped (0/2 received)

The next step is to create flows that establish a connection between the switches. This is managed by setting the Match and Action fields in Table 0. In this example, we connect the two ports eth1 and eth2 in a switch together, so everything that comes in from port eth1 is redirected to port eth2, and vice versa.

The next configuration sets up the sending of monitoring packets in the network: it configures switch s1 to also send all packets that arrive at port eth2 to the controller. The configuration steps are as follows:

1. Add a flow to switch s1 (named ‘openflow:1’ in OFP), which redirects all traffic that comes in from port eth1 (named ‘1’ in OFP) to port eth2 (named ‘2’ in OFP).

curl --request PUT \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A1/table=0/flow=0 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/json' \
  --data '{
      "flow": [
          {
              "table_id": "0",
              "id": "0",
              "priority": "10",
              "match": {
                  "in-port": "openflow:1:1"
              },
              "instructions": {
                  "instruction": [
                      {
                          "order": 0,
                          "apply-actions": {
                              "action": [
                                  {
                                      "order": 0,
                                      "output-action": {
                                          "output-node-connector": "2",
                                          "max-length": "65535"
                                      }
                                  }
                              ]
                          }
                      }
                  ]
              }
          }
      ]
}'

2. Add a flow to switch s1, which connects ports 2 and 1 in the other direction and sets switch s1 to send all packets transmitted through port 2 to the controller:

curl --request PUT \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A1/table=0/flow=1 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/json' \
  --data '{
    "flow": [
        {
            "table_id": "0",
            "id": "1",
            "priority": "10",
            "match": {
                "in-port": "openflow:1:2"
            },
            "instructions": {
                "instruction": [
                    {
                        "order": 0,
                        "apply-actions": {
                            "action": [
                                {
                                    "order": 0,
                                    "output-action": {
                                        "output-node-connector": "1",
                                        "max-length": "65535"
                                    }
                                },
                                {
                                    "order": 1,
                                    "output-action": {
                                        "output-node-connector": "CONTROLLER",
                                        "max-length": "65535"
                                    }
                                }
                            ]
                        }
                    }
                ]
            }
        }
    ]
}'

3. Check all added flows at s1 switch:

{
    "flow-node-inventory:table": [
        {
            "id": 0,
            "opendaylight-flow-table-statistics:flow-table-statistics": {
                "active-flows": 1,
                "packets-looked-up": 317,
                "packets-matched": 273
            },
            "flow": [
                {
                    "id": "1",
                    "priority": 10,
                    "opendaylight-flow-statistics:flow-statistics": {
                        "packet-count": 0,
                        "byte-count": 0,
                        "duration": {
                            "nanosecond": 230000000,
                            "second": 5
                        }
                    },
                    "table_id": 0,
                    "cookie_mask": 0,
                    "hard-timeout": 0,
                    "match": {
                        "in-port": "openflow:1:2"
                    },
                    "cookie": 0,
                    "flags": "",
                    "instructions": {
                        "instruction": [
                            {
                                "order": 0,
                                "apply-actions": {
                                    "action": [
                                        {
                                            "order": 0,
                                            "output-action": {
                                                "max-length": 65535,
                                                "output-node-connector": "1"
                                            }
                                        },
                                        {
                                            "order": 1,
                                            "output-action": {
                                                "max-length": 65535,
                                                "output-node-connector": "CONTROLLER"
                                            }
                                        }
                                    ]
                                }
                            }
                        ]
                    },
                    "idle-timeout": 0
                }
            ]
        }
    ]
}


4. Add a flow to switch s2 to connect ports 1 and 2:

curl --request PUT \
  --url http://<IP_OF_RUNNING_LIGHTY>:8888/restconf/data/opendaylight-inventory:nodes/node=openflow%3A2/table=0/flow=0 \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/json' \
  --data '{
    "flow": [
        {
            "table_id": "0",
            "id": "0",
            "priority": "10",
            "match": {
                "in-port": "openflow:2:1"
            },
            "instructions": {
                "instruction": [
                    {
                        "order": 0,
                        "apply-actions": {
                            "action": [
                                {
                                    "order": 0,
                                    "output-action": {
                                        "output-node-connector": "2",
                                        "max-length": "65535"
                                    }
                                }
                            ]
                        }
                    }
                ]
            }
        }
    ]
}'

5. Check all added flows at the s2 switch:

{
    "flow-node-inventory:table": [
        {
            "id": 0,
            "opendaylight-flow-table-statistics:flow-table-statistics": {
                "active-flows": 2,
                "packets-looked-up": 294,
                "packets-matched": 274
            },
            "flow": [
                {
                    "id": "0",
                    "priority": 10,
                    "opendaylight-flow-statistics:flow-statistics": {
                        "packet-count": 0,
                        "byte-count": 0,
                        "duration": {
                            "nanosecond": 388000000,
                            "second": 7
                        }
                    },
                    "table_id": 0,
                    "cookie_mask": 0,
                    "hard-timeout": 0,
                    "match": {
                        "in-port": "openflow:2:1"
                    },
                    "cookie": 0,
                    "flags": "",
                    "instructions": {
                        "instruction": [
                            {
                                "order": 0,
                                "apply-actions": {
                                    "action": [
                                        {
                                            "order": 0,
                                            "output-action": {
                                                "max-length": 65535,
                                                "output-node-connector": "2"
                                            }
                                        }
                                    ]
                                }
                            }
                        ]
                    },
                    "idle-timeout": 0
                },
                {
                    "id": "1",
                    "priority": 10,
                    "opendaylight-flow-statistics:flow-statistics": {
                        "packet-count": 0,
                        "byte-count": 0,
                        "duration": {
                            "nanosecond": 98000000,
                            "second": 3
                        }
                    },
                    "table_id": 0,
                    "cookie_mask": 0,
                    "hard-timeout": 0,
                    "match": {
                        "in-port": "openflow:2:2"
                    },
                    "cookie": 0,
                    "flags": "",
                    "instructions": {
                        "instruction": [
                            {
                                "order": 0,
                                "apply-actions": {
                                    "action": [
                                        {
                                            "order": 0,
                                            "output-action": {
                                                "max-length": 65535,
                                                "output-node-connector": "1"
                                            }
                                        }
                                    ]
                                }
                            }
                        ]
                    },
                    "idle-timeout": 0
                }
            ]
        }
    ]
}

Now, when we try to ping all devices in mininet, we will receive positive feedback:

mininet> pingall
*** Ping: testing ping reachability
h1 -> h2
h2 -> h1
*** Results: 0% dropped (2/2 received)

Show Packet-In messages with Wireshark

Wireshark is a popular network protocol analyzer. It lets you analyze everything that is happening in your network and is a necessity for network administrators and power-users alike.

Start Wireshark with the command:

sudo wireshark

After it starts, double-click on the ‘any’ interface. Then, filter packets with ‘openflow_v4.type == 10‘. Wireshark will now show only Packet-In messages from the OpenFlow protocol.
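
If you prefer the command line, the same display filter works with tshark, Wireshark's CLI companion (assuming it is installed):

sudo tshark -i any -Y 'openflow_v4.type == 10'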

To create traffic in the mininet network, we use the mininet command:

h2 ping h1

If everything is setup correctly, then we can see Packet-In messages showing up.

Show Packet-In messages with PacketInListener

In the section for Java developers below, there is an option for setting a Packet-In listener. This configuration logs every Packet-In message as a TRACE log to the console. When this is done, run the mininet command ‘h2 ping h1’ again.

If everything is set up correctly, then we can see Packet-In messages in logs.

Java Developers

Some configuration can be done in the Java Main class from the OpenFlow Protocol example.

Packet-in listener

Packet handling is managed by adding a packet listener. In our example, we add the PacketInListener class, which will log packet-in messages. Any new packet listener class must implement the “PacketProcessingListener” interface.

The first step is to create an instance of PacketInListener (1 in the window below). Then we pass it to the OpenflowSouthboundPluginBuilder in this part of the code (2 in the window below).

//3. start openflow SBP
     PacketProcessingListener packetInListener = new PacketInListener();           (1)
     final OpenflowSouthboundPlugin plugin;
     plugin = new OpenflowSouthboundPluginBuilder()
             .from(configuration, lightyController.getServices())
             .withPacketListener(packetInListener)                                 (2)
             .build();
     ListenableFuture<Boolean> start = plugin.start();

OpenFlow Manager (OFM) App

OpenFlow Manager (OFM) is an application developed to run on top of ODL/lighty to visualize OpenFlow (OF) topologies, program OF paths, gather OF stats and manage flow tables. In order to install the OFM app, follow these steps:

1. Download the OFM repository from GitHub and check out the master branch:

git clone https://github.com/PantheonTechnologies/OpenDaylight-Openflow-App.git
git checkout master

2. NGINX installation

NGINX is used as a proxy server towards the OFM application and the ODL/lighty RESTCONF interface. Before NGINX starts, it is important to replace the NGINX config file in /etc/nginx/sites-enabled/ with the default file from the root of this project.

In this default file, we have set up the NGINX port, the port for RESTCONF and the Grunt port. Please make sure that these ports are correct.

If NGINX is not installed yet, install it with the command:

sudo apt install nginx

After replacing the config file, start NGINX with the command:

sudo systemctl start nginx

If you need to stop NGINX, type in the command:

sudo systemctl stop nginx

3. OFM configuration
Before running the OFM standalone application, it is important to configure:

  • The controller base URL
  • NGINX port
  • ODL username and ODL/lighty password

All this information should be written in the env.module.js file, located in the OFM directory src/common/config.

4. Grunt installation
To run the OFM standalone app on a local web server, you can use the tool Grunt; everything needed for it is prepared in the OFM repository. Grunt is installable via npm, the Node.js package manager.
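
A sketch of the usual Grunt workflow, assuming Node.js and npm are already installed:

npm install -g grunt-cli   # the Grunt command-line runner
npm install                # project dependencies; run in the OFM repository root
grunt                      # builds and serves the app on the configured port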

After running Grunt and NGINX, you can access OFM standalone app via a web browser on a used NGINX port by typing the URL localhost:9000.

OpenFlow Manager environment

With OFM, we can also start the Lighty-OpenFlow Southbound plugin example from the lighty-core repository; follow the example above. We will use mininet to simulate the network topology; start it with the command:

sudo mn --controller=remote,ip=<IP_OF_RUNNING_LIGHTY>:6633 --topo=linear,2 --switch ovsk,protocols=OpenFlow13

If everything is set up correctly, then you should see a basic view of the network topology:

Device management

To see detailed device information, select the devices that you want to inspect. Then click on the “Flow management” section in the main menu bar at the top. Now, you should see device information and the added Flows.

In the Flow section, you can add a Flow by clicking on the pen at the top-left side of the table, marked by an arrow. On the right side of the table, in each row, you can delete or update the selected flow.

Adding Flows

Adding a flow can be done from the Flow management section by clicking on the pen at the top-left side of the Flow table. Here you can choose the switch from which the flow should be sent. Then just click on the property that should be added to the flow, from the left menu.

After filling in all required fields, you can view your flow as a JSON message by clicking on the “show preview” button. For sending flows to a switch, click on the “Send message” button.
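For illustration, a minimal flow preview might look roughly like this; the structure follows the OpenDaylight flow model, while the values here are made up (match IPv4 traffic, output to port 2):

    {
      "flow": [
        {
          "id": "1",
          "table_id": 0,
          "priority": 10,
          "match": {
            "ethernet-match": { "ethernet-type": { "type": 2048 } }
          },
          "instructions": {
            "instruction": [
              {
                "order": 0,
                "apply-actions": {
                  "action": [
                    { "order": 0, "output-action": { "output-node-connector": "2" } }
                  ]
                }
              }
            ]
          }
        }
      ]
    }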

Statistics

To show statistics of the network, click on the “Statistics” section in the main menu bar. Then, choose from the drop-down menu which statistics you want to see and click on the “Refresh data” button.

Conclusion

We have managed to integrate the whole OpenFlow plugin into lighty-core, in the same way it is implemented in OpenDaylight Fluorine (SR1). It is therefore possible to use the lighty-controller for managing network devices via the OpenFlow protocol.

If you would like to see how our commercial version might benefit your business, please visit lighty.io for more information, or contact us here.

Vector Packet Processing 104: gRPC & REST

Welcome back to our Vector Packet Processing implementation guide, Part 4.

Today, we will go through the essentials of gRPC and REST, introduce their core concepts, and add one missing functionality to our VPP build. This part will also introduce the open-source GoVPP, the Go language bindings for the VPP management binary API.

The Issue

Natively, VPP does not include a gRPC / RESTful interface. PANTHEON.tech has developed an internal solution, which utilizes a gRPC-gateway to VPP, using GoVPP. It is called VPP-RPC, through which you can now connect to VPP using a REST client. 

In case you are interested in this solution, please contact us via our website or contact us at sales@pantheon.tech.


Introduction

First and foremost, here are the terms that you will come across in this guide:

  • gRPC (Remote Procedure Calls) is a remote procedure call (RPC) system initially developed by Google. It uses HTTP/2 for transport, protocol buffers as the interface description language, and provides features such as authentication, bidirectional streaming and flow control, blocking or non-blocking bindings, and cancellation and timeouts. It generates cross-platform client and server bindings for many programming languages.
  • gRPC-gateway (GRPC-GW) is a protoc plugin. It reads the gRPC service definition and generates a reverse-proxy server which translates a RESTful JSON API into gRPC. This server is generated according to custom options in your gRPC definition.
  • VPP-Agent is a set of VPP-specific plugins that interact with Ligato, in order to access services within the same cloud. The VPP Agent provides VPP functionality to client apps through a VPP model-driven API.
  • VPP-RPC is our new RESTful VPP service. It utilizes gRPC & gRPC-gateway as 2 separate processes, in order to facilitate communication with VPP through GoVPP.

JSON message sequence

The gRPC gateway exposes the REST service, for which there is no built-in support in VPP. By using the gRPC-gateway and gRPC server services, the VPP API is available through gRPC and REST at the same time. Both services use models generated from the VPP binary APIs, so exposing both APIs is straightforward. It is up to the user to choose which mechanism they will use.

When exchanging data between a browser and a server, the data can only be text. Any JavaScript object can be converted into JSON and sent to the server. This allows us to process data as JavaScript objects, while avoiding complicated translations and parsing issues.

The sequence diagram below describes the path the original JSON message takes in our solution:

  1. The Client sends an HTTP request to GRPC-GW
  2. GRPC-GW transforms the JSON into a protobuf message & sends it to the gRPC server
  3. Each RPC has a Go method handling its logic. For unary RPCs, this simply means copying the protobuf message into the corresponding VPP message structure and passing it to the GoVPP binary API
  4. GoVPP eventually passes the message to the underlying VPP binary API, where the desired functionality is executed

 

JSON message sequence diagram

VPP API build process

The figure below describes the build process for a single VPP API. Let’s see what needs to happen for the tap API:

Build process for a VPP API. @PANTHEON.tech

VPP APIs are defined in /usr/share/VPP/api/, which is accessible after installing VPP. The Go package tap is generated via the VPP Agent from the binary API of the ‘tap’ VPP module. It is generated from this file: tap.api.json.

This file drives the creation of all the building blocks:

  • GoVPP’s binapi-generator generates the GoVPP file tap.ba.go
  • vpp-rpc-protogen generates tap.proto, which contains the gRPC messages and services, including the URL for each RPC (see the proto sketch after this list)
  • protoc’s Go plugin compiles the proto file and creates a gRPC stub tap.pb.go, containing client & server interfaces that define RPC methods and protobuf message structs
  • vpp-rpc-implementor generates the code that implements the TapServer interface – the actual RPC methods calling GoVPP APIs – in the Tap.server.go file
  • protoc’s gRPC-gateway plugin compiles the proto file and creates the reverse proxy tap.pb.gw.go. We don’t have to touch this file further.
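As an illustration of the custom options mentioned above, a generated proto excerpt could look roughly like this (service, message, and URL names are illustrative, modeled on the tap API):

    syntax = "proto3";

    import "google/api/annotations.proto";

    // Messages mirror the corresponding VPP binary API structures
    message TapConnectRequest { string tap_name = 1; }
    message TapConnectReply  { int32 retval = 1; }

    service Tap {
      // The custom HTTP option is what gRPC-gateway uses to expose the RPC over REST
      rpc TapConnect (TapConnectRequest) returns (TapConnectReply) {
        option (google.api.http) = {
          post: "/Tap/TapConnect/request"
          body: "*"
        };
      }
    }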

Running the examples

To run all of the above, we need to make sure these processes are running:

  • VPP service
  • gRPC server (needs root privileges)

    sudo vpp-rpc-server
  • gRPC gateway

    vpp-rpc-gateway

Running a simple request using Curl

If we want to invoke the API we have created, we can use Curl. Vpe requires a URL (vpe/showversion/request), which maps the above-mentioned API to VPP’s binary API. We will now use Curl to POST a request to the default address of localhost:8080:

curl -H "Content-Type: application/json" -X POST 127.0.0.1:8080/Vpe/ShowVersion/request --silent

{"Retval":0,"Program":"dnBlAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=","Version":"MTguMDctcmMwfjEyOC1nNmYxYzQ4ZAAAAAAAAAAAAAA=",
"BuildDate":"xaB0IG3DoWogIDMgMTQ6MTA6NTQgQ0VTVCAyMDE4AAA=",
"BuildDirectory":"L2hvbWUvcGFsby92cHA"}

The decoded reply says:

{
    "Retval": 0,
    "Program": "vpe",
    "Version": "18.07-rc0~128-g6f1c48d",
    "BuildDate": "Thu May  3 14:10:54 CEST 2018",
    "BuildDirectory": "/home/user/vpp"
}
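The string fields in the raw reply are base64-encoded, fixed-size byte arrays. Each one can be decoded on the command line; for example, the Program field decodes to “vpe” (followed by padding NUL bytes):

    echo "dnBlAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=" | base64 --decode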

Postman collection

We provide a Postman collection with our service, which serves as a starting point for users of our solution. The collection, located in the vpp-rpc repository at the tests/vpp-rpc.postman_collection.json path, contains various requests and subsequent tests for the Tap module.

Performance analysis in Curl

Curl can give us a detailed timing analysis of a request’s performance. If we run the previous request 100 times, the summary times (in milliseconds) we usually get are:

mean=4.82364000000000000
min=1.047
max=23.070
average rr per_sec=207.31232015656226418223
average rr per_min=12438.73920939373585093414
average rr per_hour=746324.35256362415105604895
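Timings like these can be collected with curl’s --write-out option; one way to measure a single request’s total time is:

    curl -o /dev/null -s -w "%{time_total}\n" \
      -H "Content-Type: application/json" \
      -X POST 127.0.0.1:8080/Vpe/ShowVersion/request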

Judging from the graph below, we see that most of the requests take well below the average mark. Profiling our solution, we found that the anomalies (above 10 ms) are caused by GoVPP itself, when waiting on the reply from VPP. This behavior is well documented on the GoVPP wiki. We can conclude that our solution closely mirrors the performance of the synchronous GoVPP APIs.

Here is the unary RPC total time in ms:

 

In conclusion, we have introduced the core concepts of gRPC and REST and have run our VPP API + GoVPP build with a REST service feature. Furthermore, we have shown you our in-house solution, VPP-RPC, which facilitates the connection between the API and GoVPP.

If you would like to inquire about this solution, please contact us for more information.

In the last part of this series, we will take a closer look at the gNMI service and how we can benefit from it.


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.

Vector Packet Processing 103: Ligato & VPP Agent

Welcome back to our guide on Vector Packet Processing. In today’s post, number three in our VPP series, we will take a look at Ligato and its VPP Agent.

Ligato is one of multiple technologies commercially supported by PANTHEON.tech.

What is a VNF?

A Virtual Network Function is a software implementation of a network function. It runs on one or multiple Virtual Machines or containers, on top of a hardware networking infrastructure. Individual functions of this network may be implemented or combined together, in order to create a complete networking communication service. A VNF can be used as a standalone entity or as part of an SDN architecture.

Its life-cycle is controlled by orchestration systems, such as the increasingly popular Kubernetes. Cloud-native VNFs and their control/management plane can expose REST or gRPC APIs to external clients, communicate over a message bus, or provide a cloud-friendly environment for deployment and usage. They can also support high-performance data planes, such as VPP.

What is Ligato?

Ligato is an open-source cloud platform for building and wiring VNFs. It provides infrastructure and libraries, code samples, and a CI/CD process to accelerate and improve the overall developer experience. It paves the way towards faster code reuse, reducing costs and increasing application agility & maintainability. Being native to the cloud, Ligato has a minimal footprint and can be easily integrated, customized, extended, and deployed using Kubernetes. The three main components of Ligato are:

  • CN Infra – a Golang platform for developing cloud-native microservices. It can be used to develop any microservice, even though it was primarily designed for Virtual Network Function management/control plane agents.
  • SFC Controller – an orchestration module for data-plane connectivity within cloud-native containers. These containers may be VPP-Agent enabled or communicate via veth interfaces.
  • BGP Agent – a Border Gateway Protocol information provider. You can also view a Ligato demonstration done by PANTHEON.tech here.

The platform is modular – new plugins provide new functionality. These plugins can be set up in layers, where each layer can form a new platform with different services at a higher layer plane. This approach mainly aims to create a management/control plane for VPP, with the addition of the VPP Agent.

What is the VPP Agent?

The VPP Agent is a set of VPP-specific plugins that interact with Ligato, in order to access services within the same cloud. VPP Agent provides VPP functionality to client apps through a VPP model-driven API. External and internal clients can access this API, if they are running on the same CN-Infra platform, within the same Linux process. 

Quickstarting the VPP Agent

For this example, we will work with the pre-built Docker image.

Install & Run

  1. Pull and run the pre-built Docker image:

    docker pull ligato/vpp-agent
    docker run -it --name vpp --rm ligato/vpp-agent
  2. Using agentctl, configure the VPP Agent:

    docker exec -it vpp agentctl -
  3. Check the configuration, using agentctl or the VPP console:

    docker exec -it vpp agentctl -e 172.17.0.1:2379 show
    docker exec -it vpp vppctl -s localhost:500

For a detailed rundown of the Quickstart, please refer to the Quickstart section of the VPP Agent’s GitHub.

We have shown you how to integrate and quickstart the VPP Agent on top of Ligato.

Our next post will highlight gRPC/REST – until then, enjoy playing around with the VPP Agent.


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.

Vector Packet Processing 102: Honeycomb & hc2vpp

Welcome to the second part of our VPP Introduction series, where we will talk about details of the Honeycomb project. Please visit our previous post on VPP Plugins & Binary API, which is used in Honeycomb to manage the VPP agent.

What is Honeycomb?

Honeycomb is a generic data-plane management agent that provides a framework for building specialized agents. It exposes NETCONF, RESTCONF, and BGP as northbound interfaces.

Honeycomb runs several highly functional sets of APIs, based on ODL, which are used to program the VPP platform. It leverages ODL’s existing tools and integrates several of its existing components (YANG Tools, MD-SAL, NETCONF/RESTCONF…). In other words, it is a light-on-resources, bare-bones version of OpenDaylight.

Its translation layer and data processing pipelines are classified as generic, which makes it extensible and usable not only as a VPP-specific agent.

Honeycomb’s functionality can be split into two main layers, plus plugins:

  • Data Processing layer – pipeline processing for data from northbound interfaces, towards the Translation layer
  • Translation layer – mainly handles configuration updates from the data processing layer, plus reads and writes configuration data
  • Plugins – extend Honeycomb’s usability

Honeycomb mainly acts as a bridge between VPP and the actual OpenDaylight SDN Controller:


Examples of VPP x Honeycomb integrations

We’ve already showcased several use cases on our Pantheon Technologies’ YouTube channel:

For the purpose of integrating VPP with Honeycomb, we will further refer to the project hc2vpp, which was directly developed for VPP usage.

What is hc2vpp?

This VPP-specific build is called hc2vpp, and it provides an interface (somewhere between a GUI and a CLI) for VPP. It runs on the same host as the VPP instance and allows managing it out of the box. This project is led by Pantheon’s own Michal Čmarada.

Honeycomb was created due to a need for configuring VPP via NETCONF/RESTCONF. At the time it was created, NETCONF/RESTCONF support was provided by ODL; therefore, Honeycomb is based on certain ODL tools (data store, YANG Tools, and others). ODL as such uses an enormous variety of tools, so Honeycomb was created as a separate project in order to keep a smaller footprint. It exists as a separate server and reuses these implementations from ODL.

Later on, it was decided that Honeycomb should be split into a core instance, while hc2vpp would handle the VPP-related parts. The split also occurred in order to make it possible to create proprietary device control agents. hc2vpp (Honeycomb to VPP) is a configuration agent, so that configurations can be sent via NETCONF/RESTCONF. It translates the configuration to low-level APIs (called Binary APIs).
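As an illustration, once hc2vpp is running, reading the VPP interface configuration over RESTCONF can look like this (assuming Honeycomb’s default RESTCONF port 8183 and default credentials; adjust both to your setup):

    curl -u admin:admin http://localhost:8183/restconf/config/ietf-interfaces:interfaces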

Honeycomb and hc2vpp can be installed in the same way as VPP, by downloading the repositories from GitHub. You can either:

Install Honeycomb

Install hc2vpp

For more information, please refer to the hc2vpp official project site.

In the upcoming post, we will introduce you to the Ligato VPP Agent.


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.

PANTHEON.tech is the 2nd largest OpenDaylight contributor for Q3/2018

In the last week of November 2018, Bitergia, a software development analytics company, published a report on the past and current status of the OpenDaylight project, which plays a significant role in PANTHEON.tech’s offerings and solutions.

PANTHEON.tech’s CTO, Robert Varga, is leading the list of per-user contributions to the source code of OpenDaylight, with over 980 commits in Q3 of 2018. This achievement further establishes PANTHEON.tech’s position as one of the largest contributors to the OpenDaylight project.

As for the list of companies which contribute to the source code of OpenDaylight, PANTHEON.tech is the 2nd largest contributor for Q3/2018, with 1034 commits. We were just 34 commits shy of the top contributor position, which belongs to Red Hat.

Due to ODL’s open-source nature, anyone can contribute to the project and improve it in the long run. Any change that gets added to the source code is defined as a commit. These changes need approval and cannot be any type of automated activity – including bot actions or merges. This means that each single commit is a unique change added to the source code.

PANTHEON.tech will continue its commitment to improving OpenDaylight and we are looking forward to being a part of the future of this project.

What is OpenDaylight?

ODL is a collaborative open-source project aimed at speeding up the adoption of SDN and creating a solid foundation for Network Functions Virtualization (NFV).

PANTHEON.tech’s nurturing of ODL goes back to when the project was forming. In a sense, PANTHEON.tech has led the way in defining what an SDN controller is and should be. This requires dedication, which has been proven over the years by an extensive amount of contributions, thanks to its expert developers.

Click here if you are interested in our solutions, which are based on or integrate ODL’s framework.


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.

PANTHEON.tech @ Huawei Connect 2018 in Shanghai

PANTHEON.tech visited Shanghai last week to attend the third annual Huawei Connect. Martin Varga shares some insights from the event.

Activate Intelligence

This year’s theme was Activate Intelligence. Huawei outlined its broad strategy to bring artificial intelligence (AI) to the masses, with applications in manufacturing, autonomous driving, smart cities, IoT, and other areas. AI will enable billions of new devices to be connected and to transfer big data over the network.

The conference was held in Shanghai, China, at the Shanghai World Expo Exhibition Center. Huawei put a lot of resources and effort into organizing the event, which drew over 26,000 attendees. The conference was organized perfectly, down to the last detail (exhibition areas, keynote and conference areas, chill-out zones, etc.).

We witnessed demonstrations of various smart technologies, ranging from smart city applications to smart education and smart transportation. Smart everything.

One of the most impressive technology demonstrations was an AI that was able to translate Chinese to English and vice versa, as well as a human translator could. Microsoft, in cooperation with Huawei, states:

“Internal tests have shown, depending on the language, up to a 23 percent better offline translation quality over competing best-in-class offline packs.”

Huawei is also building an AI ecosystem of partners, targeted to exceed 1 million developers over the next three years and backed by US$140m.

Networking

We had some interesting meetings with Huawei’s representatives. It was very pleasant to learn about Huawei’s visions for the near future, and we are glad to share the same vision for an exciting future. Huawei invests heavily in researching new technologies, such as AI and IoT, in order to define practical use cases that can be deployed into its product portfolio.

PANTHEON.tech, as a software development company, is strongly focused on computer networking, which relates to Huawei’s vision of integrating AI into managing network operations.

Mr. Yang Jin, Director of Network Data Analytics Research at Huawei Technologies Co., stated:

“Artificial Intelligence and Machine Learning will abstract data to make next-generation communication breakthroughs come to life.”

Feel free to contact PANTHEON.tech if you are interested in AI, AR/VR, IoT, Intent-Driven Networking, SDN, NFV, Big Data, or related areas. We can talk about challenges and how we can solve them together.

Martin Varga

Technical Business Development Manager

lighty.io core becomes Open-Source

PANTHEON.tech, the proven supporter of open-source software and its communities, and a leader in Software Defined Networking (SDN) and the OpenDaylight (ODL) platform, has decided to open-source the core components of the current go-to SDN controller development kit: lighty.io.

lighty.io was announced at the last Open Networking Summit in the USA. This ONS will be one that all OpenDaylight users will remember.

Today, PANTHEON.tech continues to push forward open-source community projects and its commercial applications developed on open-source software, especially the lighty.io core.

What is lighty.io?

The lighty.io journey started when we realized that the way to move the upstream project in the direction we envisioned was to provide a solution, rather than just talk about it. Therefore, we focused on solving the biggest pain point of current ODL and turned that solution into the idea-bearer of the whole product. We took the biggest obstacle – Apache Karaf – out of ODL developers’ lives, improving their efficiency tremendously.

On the other hand, the commercial lighty.io product available for purchase is developed using the same lighty.io core which gets open-sourced today. We are tirelessly adding new features, driven by demand and cooperation with our customers – such as the lighty.io Network Topology Visualisation Component, Go/Java/Python RESTCONF clients, improved RESTCONF notifications with HTTP/2 support, an improved southbound NETCONF plugin with implemented support for YANG actions, and a NETCONF simulator, to name a few.

Our vision from the start was to enhance the commercial version of lighty.io with bleeding-edge features and improvements that are not yet in open-source ODL. This gives lighty.io users the ability to speed up their development and deployment. Today, lighty.io users get the chance to experience these changes, instead of waiting until such improvements appear upstream or developing them themselves.

PANTHEON.tech has released the lighty.io core components under the Eclipse Public License v1.0, as continuous support to the community.

Future of SDN

We believe this generous act will result in new projects being spun up based on OpenDaylight components, making them faster and cheaper to develop, and will give you a competitive edge in today’s fast-evolving world of microservice and cloud-oriented deployments.

We encourage all OpenDaylight users, data center managers, and Telco and Service Provider DevOps to give lighty.io a try with their existing applications. It will amaze you.

Today, PANTHEON.tech will officially release the lighty.io core as free and open-source software at the event. Please join us at the Open Networking Summit Europe in Amsterdam, at booth #14.

There is more to come, as we will reveal how lighty.io cuts down software development costs and time, helping our customers consume complex data center management features with ease.

Stay tuned and watch this space for the other great announcements around lighty.io which will take place at the event. Be there to witness the amazing.

lighty.io powers datacenter management at kaloom.com


Complete automation and full forwarding plane programmability

Private data centers are a hot topic for companies and enterprises that are not willing to push all their data into public clouds. Kaloom Software Defined Fabric™ (Kaloom SDF) is the world’s first fully programmable, automated, software-based data center fabric capable of running VNFs efficiently at scale. It is the first data center networking fabric on the market that provides complete automation and full forwarding plane programmability.

Kaloom approached PANTHEON.tech last year, knowing Pantheon’s intensive and long involvement in SDN, particularly in the OpenDaylight project. OpenDaylight (ODL) is a modular open platform for orchestrating and automating networks of any size and scale. The OpenDaylight platform arose out of the SDN movement, in which PANTHEON.tech has expertise and experience. Hence, it was a logical step to utilize this expertise in this project and leverage what has already been done.

A traditional ODL-based controller design was not suitable for this job, because of the bulkiness of Karaf-based deployments. Kaloom requested a modern web UI, which the vanilla ODL platform does not provide. lighty.io, as a component library, provides the opportunity to run ODL services such as MD-SAL, NETCONF, and YANG Tools in any modern web server stack, and to integrate with other components like MongoDB.

Architecture

The following architecture is becoming something of a blueprint for SDN applications today. We utilize the best of both worlds:

  1. MD-SAL, NETCONF, and YANG Tools from ODL,
  2. an updated, modern web stack (Jetty/Jersey), and
  3. MongoDB as a persistent data store.

 

This is how the Kaloom Fabric Manager (KFM) project started. After several months of customized development, we delivered a tailored web application which provides a management UI for Kaloom SDF. We changed and tailored our Visibility Package application to suit Kaloom’s requirements and specifics; this specialized version uses the name KFM. The architecture diagram above shows the details/internals of KFM and how we interconnect with Kaloom’s proprietary Fabric Manager/Virtual Fabric Manager controller devices.

The solution for physical data centers

The lighty.io-based back-end of KFM, with its NETCONF plugin, provides REST services to the Angular UI, which uses our Network Topology Visualization Component for better topology visualization and user experience. Using these REST endpoints, it is easy to send a specific NETCONF RPC to the Kaloom SDF controllers.

While working on this next-gen Data Center Infrastructure Management software, we realized that integrating all the moving parts of the system is a crucial step for final delivery. Since different teams were working on different parts, it was crucial that we could isolate the lighty.io part of the system and adapt it to the Kaloom SDF as much as possible. We used our field-tested NETCONF device simulator from the lighty.io package to deliver software that was tested thoroughly, ensuring the stability of the KFM UI.

Kaloom SDF provides a solution for physical data centers administrated by Data Center Infrastructure Provider (DCIP) users. A physical data center can easily be sliced into virtual data centers offered to customers, called virtual Data Center Operator (vDCO) users. The DCIP user can monitor and configure the physical fabrics – the PODs of the data center. The KFM web UI shows the fabrics in a topology view and allows updating the attributes of fabrics and fabric nodes.

Topology View of Fabric Manager

Advantages for DCIP

The main task of the DCIP user is to slice the fabrics into virtual data centers and virtual fabrics. This process involves choosing servers, through their associated termination points, and associating them with a newly created virtual fabric manager controller. Server resources are used through the virtual fabric manager by vDCO users.

vDCO users can use the server resources and connect them via the network management of their virtual data center. The vDCO can attach server ports to switches with the proper encapsulation settings. After a switch is ready, the vDCO can create a router and attach switches to it. The router offers different configuration possibilities to meet the vDCO user’s needs: L3 interface configuration, static routing, BGP routing, VXLANs, and many more. KFM also offers a topology view of the virtual data center network, so you can check the relations between servers, switches, and routers.

Topology View of Fabric Manager

For more details about the KFM UI in action, please see the demo video with the NETCONF simulator of Kaloom SDF below, or visit Kaloom or the Kaloom Academy.

lighty.io runs 5G on xRAN

In April 2018, the xRAN forum released the Open Fronthaul Interface Specification – the first specification made publicly available by xRAN since its launch in October 2016. The released specification allows a wide range of vendors to develop innovative, best-of-breed remote radio units/heads (RRU/RRH) for a wide range of deployment scenarios, which can be easily integrated with virtualized infrastructure & management systems using standardized data models.

This is where PANTHEON.tech enters the scene. We became one of the first companies to introduce a full-stack 5G-compliant solution based on this specification.

With just a few days spent coding and utilizing the readily available lighty.io components, we created a Radio Unit (RU) simulator and an SDN controller to manage a group of Radio Units.

Now, let us inspect the architecture and elaborate on some important details.

We used lighty.io, specifically its generic NETCONF simulator, to set up an xRAN Radio Unit (RU) simulator. xRAN specifies YANG models for 5G Radio Units. The lighty.io NETCONF device library is used as a base, which made it easy to add custom behavior; with that, the 5G RU is ready to stream data to a 5G controller.

The code in the controller pushes the data collected from RUs into Elasticsearch for further analysis. The RU device emits notifications of simulated Antenna Line Devices connected to the RU, containing:

  • Measured Rx and Tx input power in mW
  • Tx Bias Current in mA (Internally measured)
  • Transceiver supply voltage in mV (Internally measured)
  • Optional laser temperature in degrees Celsius. (Internally measured)

*We used the device’s xRAN performance-management model for this purpose.

lighty.io as a 5G controller

With lighty.io, we created an OpenDaylight-based SDN controller that can connect to RU simulators using NETCONF. Once an RU device is connected, telemetry data is pushed via NETCONF notifications to the controller, and then directly into Elasticsearch.
Usually, Logstash is required to upload data into Elasticsearch. In this case, it is the 5G controller that pushes device data directly into Elasticsearch, using time-series indexing.
On a Radio Unit device connect event, the monitoring process starts automatically. The ald-communication RPC is called on the RU device, collecting statistics for:

  • The number of frames with incorrect CRC (FCS) received from the ALD – running counter
  • The number of frames without a stop flag received from the ALD – running counter
  • The number of octets received from the HDLC bus – running counter

*We used the xran-ald.yang model for this purpose.
The lighty.io 5G controller is also listening to the notifications from the RU device mentioned above.
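As a rough sketch, pushing one telemetry sample into Elasticsearch from Java could look like this; the high-level REST client (Elasticsearch 7.x) is one option, and the index name and document fields are illustrative:

    import java.time.Instant;
    import org.apache.http.HttpHost;
    import org.elasticsearch.action.index.IndexRequest;
    import org.elasticsearch.client.RequestOptions;
    import org.elasticsearch.client.RestClient;
    import org.elasticsearch.client.RestHighLevelClient;
    import org.elasticsearch.common.xcontent.XContentType;

    public class TelemetryPusher {
        public static void main(String[] args) throws Exception {
            try (RestHighLevelClient client = new RestHighLevelClient(
                    RestClient.builder(new HttpHost("localhost", 9200, "http")))) {
                // One telemetry document with a time-series friendly timestamp
                IndexRequest request = new IndexRequest("ru-telemetry")
                        .source("{\"rxPowerMw\":1.2,\"timestamp\":\"" + Instant.now() + "\"}",
                                XContentType.JSON);
                client.index(request, RequestOptions.DEFAULT);
            }
        }
    }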

Elasticsearch and Kibana

Data collected by the lighty.io 5G controller via RPC calls and notifications is pushed directly into Elasticsearch indices. Once indexed, Elasticsearch provides a wide variety of queries over the stored data.
Typically, we can display the number of faulty frames received from “Antenna Line Devices” over time, or analyze operational parameters of Radio Unit devices, such as receiving and transmitting input power.
Such data is precious for Radio Unit setup, making a control-plane feedback loop possible.

By adding Elasticsearch into the loop, data analytics and the feedback loop become ready to perform complex tasks, such as faulty-frame statistics from the “Antenna Line Devices”, or the Radio Unit operational setup.

How do we see the future of xRAN with lighty.io?

The benefit of this solution is a full-stack xRAN test. YANG models and their specifications alone are obviously not enough, considering the size of the project. With lighty.io 5G xRAN, we invite Radio Unit device vendors and 5G network providers to cooperate and build upon this solution. Having the Radio Unit simulators available and ready allows for a quick development cycle, without being blocked by RU vendors’ bugs.

lighty.io has been used as a 5G rapid application development platform, enabling a quick xRAN Radio Unit monitoring system setup.
We can easily obtain xRAN Radio Unit certification against the lighty.io 5G controller and provide RU simulations for the management plane.

Visit lighty.io page, and check out our GitHub for more details.

Meet Robert Varga – PANTHEON.tech @ ONS 2018

Robert Varga is PANTHEON.tech’s Chief Technology Officer, with almost two decades of Information Technology industry experience, ranging from being a C code monkey, through various roles in telecommunications IT operations, to architecting bleeding-edge software platforms.

Robert has a deep expertise in Software Defined Networking, its applications and the OpenDaylight platform.

Within those decades, some of the technologies he has gained experience with are C/C++, Java, Python, various UNIX-like systems, and database systems.

Robert has a very strong background in design, development, deployment, and administration of large-scale platforms with the primary focus on high availability and security.

Robert has been involved in OpenDaylight from its start, architecting, designing and implementing the MD-SAL. He is the most prolific OpenDaylight contributor and a member of the OpenDaylight Technical Steering Committee, representing kernel projects. His code contributions revolve around key infrastructure components, such as YANG Tools, MD-SAL and Clustered Data Store. He also designed and implemented the first versions of the BGP and PCEP plugins.

 

Source: Bitergia Analytics

Until today, Robert Varga has made 11,368 commits in 66 ODL projects over the course of ODL’s lifespan. That is 621,236 added and 524,842 removed lines of code, which translates to roughly 12 great novels written in code. ODL continues to be a great example of what open-source software is and how international contributors can collaborate beautifully to create the next great thing. There are currently only 13 TSC members in ODL who help steer the project forward and lead ODL to be the most successful SDN controller in the world. He is proudly one of the ODL Technical Steering Committee members and a committer to a range of projects.

Robert Varga, the all-time top contributor to ODL and Chief Technology Officer of PANTHEON.tech, makes the company proud to be among the top contributors to such an innovative, successful project.

Robert shares the PANTHEON.tech’s ambition to create the biggest and most successful open-source Software Defined Networking (SDN) controller in the world.

Robert will be available to share his deep expertise in the field, representing PANTHEON.tech, the silver sponsor of the Open Networking Summit Europe, which will take place on Sep 25–27 in Amsterdam.

Join Robert at PANTHEON.tech’s booth #14 at the event to get a glimpse of the Software Defined Networking future.

 

Meet Štefan Kobza – PANTHEON.tech @ ONS 2018

 

Štefan Kobza is Chief Operation Officer at PANTHEON.tech with almost two decades of professional Information Technology industry experience.

He started his career as a software developer and built his way up. Štefan has worked on different projects, ranging from developing proprietary code for major networking vendors to rating and billing services, mainly in the computer networking and telecommunication industries. All of this coding experience helps Štefan understand his game and stay on top of the projects he is involved in.

Štefan has taken part in the process of defining some IETF RFCs ¹ ², which was an experience on its own.

Lately, he has developed a special interest in open-source projects and their communities.

Understanding the daily life of a software developer, Štefan helps build an ever-growing and ever-improving environment that enables PANTHEON.tech to provide the highest quality engineering services.

You can find his published articles and activities on his LinkedIn profile.

Besides leading the day-to-day business at PANTHEON.tech, Štefan also closely watches the development around OpenDaylight, OPNFV, ONAP, FD.io, Sysrepo, Contiv, Ligato, and others.

Find Štefan representing PANTHEON.tech, the silver sponsor, at booth #14 at the Open Networking Summit Europe, to exchange ideas and get a glimpse of the Software Defined Networking future.

Save the date and get your tickets. #ons2018, Amsterdam Sep 25-27

PANTHEON.tech releases lighty.io version 9.0

PANTHEON.tech is proud to announce the release of lighty.io 9.0, following the official OpenDaylight Fluorine release.

lighty.io has been adapted to reflect the latest upstream changes and is now fully compatible with OpenDaylight Fluorine.

Check out our latest lighty.io release on our GitHub account.

Here are some noteworthy improvements that OpenDaylight Fluorine established:

  • YANG Tools cleanup and refactoring.
  • Streamlined generated YANG module APIs.
  • Improved Java bindings.
  • NETCONF and RESTCONF improvements.

The biggest ODL improvement is the new set of core services provided by the MD-SAL project. Older services, previously provided by the controller project, have been marked as deprecated and will be removed in future ODL/lighty.io releases.

lighty.io provides new MD-SAL services as well as deprecated controller implementations.

Please see lighty.io Services for reference.

If your application uses any of the deprecated marked services, you should consider refactoring. Contact us for any troubleshooting requirements.

In addition to the latest ODL improvements, lighty.io has more to offer:

  • An up-to-date web server, Jetty 9.4.11.v20180605, with better HTTP/2 support.
  • RESTCONF implementations are now in compliance with HTTP/2.
  • YANG actions implementation, as defined in RFC 7950.
  • gNMI / OpenConfig south-bound plugin.
  • Minor changes that let the controller start up faster.
  • Improved Javadoc for the main APIs.
  • An easier pathway towards JDK 11 adoption.
  • Spring.io dependency injection integration.
  • And many more – please check them out on the lighty.io web.

List of new and deprecated services:

List of new MD-SAL services:

  • org.opendaylight.mdsal.binding.api.DataBroker
  • org.opendaylight.mdsal.binding.api.MountPointService
  • org.opendaylight.mdsal.binding.api.NotificationPublishService
  • org.opendaylight.mdsal.binding.api.NotificationService
  • org.opendaylight.mdsal.binding.api.RpcProviderService
  • org.opendaylight.mdsal.binding.dom.codec.api.BindingCodecTreeFactory
  • org.opendaylight.mdsal.binding.dom.codec.api.BindingNormalizedNodeSerializer
  • org.opendaylight.mdsal.dom.api.DOMDataBroker
  • org.opendaylight.mdsal.dom.api.DOMDataTreeService
  • org.opendaylight.mdsal.dom.api.DOMDataTreeShardingService
  • org.opendaylight.mdsal.dom.api.DOMMountPointService
  • org.opendaylight.mdsal.dom.api.DOMNotificationPublishService
  • org.opendaylight.mdsal.dom.api.DOMNotificationService
  • org.opendaylight.mdsal.dom.api.DOMRpcProviderService
  • org.opendaylight.mdsal.dom.api.DOMRpcService
  • org.opendaylight.mdsal.dom.api.DOMSchemaService
  • org.opendaylight.mdsal.dom.api.DOMYangTextSourceProvider
  • org.opendaylight.mdsal.dom.spi.DOMNotificationSubscriptionListenerRegistry

List of deprecated services:

  • org.opendaylight.controller.sal.binding.api.NotificationProviderService
  • org.opendaylight.controller.sal.binding.api.RpcProviderRegistry
  • org.opendaylight.controller.md.sal.dom.spi.DOMNotificationSubscriptionListenerRegistry
  • org.opendaylight.controller.md.sal.dom.api.DOMMountPointService
  • org.opendaylight.controller.md.sal.dom.api.DOMNotificationPublishService
  • org.opendaylight.controller.md.sal.dom.api.DOMNotificationService
  • org.opendaylight.controller.md.sal.dom.api.DOMDataBroker
  • org.opendaylight.controller.md.sal.dom.api.DOMRpcService
  • org.opendaylight.controller.md.sal.dom.api.DOMRpcProviderService
  • org.opendaylight.controller.md.sal.binding.api.MountPointService
  • org.opendaylight.controller.md.sal.binding.api.NotificationService
  • org.opendaylight.controller.md.sal.binding.api.DataBroker
  • org.opendaylight.controller.md.sal.binding.api.NotificationPublishService

 lighty.io by PANTHEON.tech

Meet Juraj Veverka – PANTHEON.tech @ ONS 2018

Juraj Veverka is PANTHEON.tech’s Technical Leader and Solution Design Architect, with over 15 years of technical experience. Juraj has deep expertise in the design and implementation of Software Defined Networking applications using the OpenDaylight platform. Some of the technologies in his toolbox are Java, Python, JEE7, Spring Boot, gRPC, OpenDaylight, ONAP, YANG, NETCONF, RESTCONF, OpenFlow, BGP, FD.io, C/C++, and HTML.

Juraj has a Telecommunication and Electrical Engineering degree from the University of Zilina, Slovakia. Juraj shares the company’s mission to create the best development stack for SDN applications and has a passion for Java and embedded platforms.

You can find his published articles on his LinkedIn profile and see what he is working on, on GitHub.

Juraj will be available to share his deep expertise in the field, representing PANTHEON.tech, the silver sponsor of the Open Networking Summit Europe, which is going to take place on Sep 25–27 in Amsterdam.

Find PANTHEON.tech at booth #14 at the event to get a glimpse of the Software Defined Networking future.

Meet Miroslav Miklus – PANTHEON.tech @ ONS 2018

Miroslav Mikluš is the Vice President of Engineering at PANTHEON.tech.

Since graduating from the Slovak Technical University in Bratislava, Miroslav has gained almost two decades of Information Technology experience, mainly in Slovak tech companies, holding titles such as Director of Engineering, Head of the Information Technology Services division, and Senior Technical Leader.

He started his career as a software developer and has climbed many steps of the tech industry to reach where he is now.

Miroslav has helped develop and deliver dozens of successful software development and integration projects, mostly for the computer networking industry.

He loves playing basketball and running.

You might remember him from the last Open Networking Summit North America and his presentation named “Ligato as a Golang integration platform of a BGP daemon with VPP.”

Miroslav will be representing PANTHEON.tech, the silver sponsor of the Open Networking Summit Europe, which is going to take place on Sep 25–27 in Amsterdam. Be sure to be there to meet him in person at PANTHEON.tech’s booth #14 or at the 5k Fun Run on September 26.