Vector Packet Processing 103: Ligato & VPP Agent

Welcome back to our guide on Vector Packet Processing. In this third post of our VPP series, we will take a look at Ligato and its VPP Agent.

Ligato is one of multiple commercially supported technologies supported by

What is a VNF?

A Virtual Network Function (VNF) is a software implementation of a network function. It runs on one or more virtual machines or containers, on top of a hardware networking infrastructure. Individual functions of this network may be implemented or combined together in order to create a complete networking communication service. A VNF can be used as a standalone entity or as part of an SDN architecture.

Its life-cycle is controlled by orchestration systems, such as the increasingly popular Kubernetes. Cloud-native VNFs and their control/management planes can expose REST or gRPC APIs to external clients, communicate over a message bus, or provide a cloud-friendly environment for deployment and usage. They can also support high-performance data planes, such as VPP.

What is Ligato?

Ligato is an open-source cloud platform for building and wiring VNFs. It provides infrastructure, libraries, code samples and a CI/CD process to accelerate and improve the overall developer experience. It paves the way towards faster code reuse, reduced costs and increased application agility & maintainability. Being cloud-native, Ligato has a minimal footprint and can be easily integrated, customized, extended and deployed using Kubernetes. The three main components of Ligato are:

  • CN Infra – a Golang platform for developing cloud-native microservices. It can be used to develop any microservice, even though it was primarily designed for Virtual Network Function management/control plane agents.
  • SFC Controller – an orchestration module for data-plane connectivity within cloud-native containers. These containers may be VPP-Agent enabled or communicate via veth interfaces.
  • BGP Agent – a Border Gateway Protocol information provider.

The platform is modular – new plugins provide new functionality. These plugins can be set up in layers, where each layer can form a new platform with different services at a higher layer plane. This approach mainly aims to create a management/control plane for VPP, with the addition of the VPP Agent.
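The layered-plugin idea can be sketched in a few lines. This is only an illustrative model (Ligato itself is written in Go, and the names below are invented, not the real CN-Infra API): plugins declare their dependencies, and a "platform" is just a set of plugins initialized with the lower layers first.

```python
class Plugin:
    def __init__(self, name, deps=()):
        self.name = name
        self.deps = list(deps)

def init_order(plugins):
    """Resolve plugin initialization order so that every dependency
    (lower layer) is initialized before the plugins built on top of it."""
    order, seen = [], set()

    def visit(p):
        if p.name in seen:
            return
        for d in p.deps:       # initialize dependencies first
            visit(d)
        seen.add(p.name)
        order.append(p.name)

    for p in plugins:
        visit(p)
    return order

# A higher-layer platform: a hypothetical VPP management plane formed by
# stacking a "vpp-agent" plugin on top of generic infrastructure plugins.
kvstore = Plugin("kvstore")
rpc = Plugin("rpc")
vpp_agent = Plugin("vpp-agent", deps=[kvstore, rpc])

print(init_order([vpp_agent]))  # ['kvstore', 'rpc', 'vpp-agent']
```

The point of the sketch: adding functionality means adding a plugin and declaring what it sits on, never editing the core.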

What is the VPP Agent?

The VPP Agent is a set of VPP-specific plugins that interact with Ligato in order to access services within the same cloud. The VPP Agent provides VPP functionality to client apps through a VPP model-driven API. External and internal clients can access this API, provided they are running on the same CN-Infra platform, within the same Linux process.
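As a rough sketch of what "model-driven" means here: the desired configuration lives as structured data under well-known keys, and the agent reconciles VPP against it. The key layout and field names below are invented for illustration and are not the agent's real schema.

```python
import json

def interface_key(agent_label, if_name):
    # hypothetical key layout for a KV data store shared with the agent
    return f"/vnf-agent/{agent_label}/config/vpp/interfaces/{if_name}"

# Client side: declare the desired state of an interface as a model...
desired = {
    "name": "loop1",
    "type": "SOFTWARE_LOOPBACK",
    "enabled": True,
    "ip_addresses": ["10.0.0.1/24"],
}
kv_store = {interface_key("vpp1", desired["name"]): json.dumps(desired)}

# ...agent side: read the model back and act on it (here we only parse it).
key = interface_key("vpp1", "loop1")
model = json.loads(kv_store[key])
print(key, model["enabled"])
```

The client never issues imperative calls; it only describes the end state, and the agent translates that into VPP operations.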

Quickstarting the VPP Agent

For this example, we will work with the pre-built Docker image.

Install & Run

  1. Pull and run the Docker image:

    docker pull ligato/vpp-agent
    docker run -it --name vpp --rm ligato/vpp-agent
  2. Using agentctl, configure the VPP Agent:

    docker exec -it vpp agentctl -
  3. Check the configuration, using agentctl or the VPP console:

    docker exec -it vpp agentctl -e show
    docker exec -it vpp vppctl -s localhost:500

For a detailed rundown of the quickstart, please refer to the Quickstart section of the VPP Agent's GitHub.

We have shown you how to integrate and quickstart the VPP Agent on top of Ligato.

Our next post will highlight gRPC/REST – until then, enjoy playing around with VPP Agent.

You can contact us at

Explore our Pantheon GitHub.

Watch our YouTube Channel.

Vector Packet Processing 102: Honeycomb & hc2vpp

Welcome to the second part of our VPP introduction series, where we will talk about details of the Honeycomb project. Please visit our previous post on VPP Plugins & Binary API, which Honeycomb uses to manage the VPP agent.

What is Honeycomb?

Honeycomb is a generic, data plane management agent and provides a framework for building specialized agents. It exposes NETCONF, RESTCONF and BGP as northbound interfaces.

Honeycomb runs several highly functional sets of APIs, based on ODL, which are used to program the VPP platform. It leverages ODL’s existing tools and integrates several of its components (YANG Tools, MD-SAL, NETCONF/RESTCONF…). In other words, it is a light-on-resources, bare-bones version of OpenDaylight.

Its translation layer and data processing pipelines are generic, which makes it extensible and usable beyond a VPP-specific agent.
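To make the northbound side concrete, here is what assembling a RESTCONF call against an agent like Honeycomb could look like. The port, path and YANG module below are assumptions for illustration (ietf-interfaces is a standard IETF model used only as an example); consult the agent's documentation and models for the real values.

```python
import json

BASE = "http://localhost:8183/restconf/config"  # assumed default port

def put_interface_request(name):
    # RESTCONF maps YANG data nodes onto URL path segments
    url = f"{BASE}/ietf-interfaces:interfaces/interface/{name}"
    body = {
        "interface": [
            {"name": name,
             "type": "iana-if-type:ethernetCsmacd",
             "enabled": True}
        ]
    }
    return url, json.dumps(body)

url, payload = put_interface_request("GigabitEthernet0/8/0")
print(url)
```

An HTTP PUT of `payload` to `url` would then be translated by the agent into the corresponding data-plane configuration.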

Honeycomb’s functionality can be split into two main layers, plus plugins:

  • Data Processing layer – pipeline processing for data from northbound interfaces, towards the Translation layer
  • Translation layer – mainly handles configuration updates from the Data Processing layer, plus reads and writes configuration data
  • Plugins – extend Honeycomb’s usability

Honeycomb mainly acts as a bridge between VPP and the actual OpenDaylight SDN Controller.

Examples of VPP x Honeycomb integrations

We’ve already showcased several use cases on our Pantheon Technologies’ YouTube channel.

For the purpose of integrating VPP with Honeycomb, we will further refer to the project hc2vpp, which was directly developed for VPP usage.

What is hc2vpp?

This VPP-specific build is called hc2vpp, and it provides an interface (somewhere between a GUI and a CLI) for VPP. It runs on the same host as the VPP instance and allows managing it out of the box. This project is led by Pantheon’s own Michal Čmarada.

Honeycomb was created out of a need to configure VPP via NETCONF/RESTCONF. At the time it was created, NETCONF/RESTCONF was provided by ODL; therefore, Honeycomb is based on certain ODL tools (data store, YANG Tools and others). ODL as such uses an enormous variety of tools. Honeycomb was created as a separate project in order to keep a smaller footprint: it exists as a separate server and starts these ODL implementations itself.

Later on, it was decided that Honeycomb should be split into a core instance, with hc2vpp handling the VPP-related parts. The split also provides the possibility of creating proprietary device control agents. hc2vpp (Honeycomb to VPP) is a configuration agent, so that configurations can be sent via NETCONF/RESTCONF. It translates the configuration into low-level APIs (called binary APIs).
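Conceptually, this translation step diffs declarative configuration and emits low-level calls. Here is a toy version in Python (hc2vpp's real translation is Java code working against generated bindings, and the call payloads below are simplified, though `sw_interface_add_del` and `sw_interface_set_flags` are real VPP binary API message names):

```python
def translate(old, new):
    """Diff two interface-config dicts and emit low-level 'binary API' calls."""
    calls = []
    for name, cfg in new.items():
        if name not in old:
            # interface newly configured -> create it
            calls.append(("sw_interface_add_del", {"name": name, "is_add": 1}))
        elif cfg != old[name]:
            # interface changed -> push updated flags
            calls.append(("sw_interface_set_flags", {"name": name, **cfg}))
    for name in old:
        if name not in new:
            # interface removed from config -> delete it
            calls.append(("sw_interface_add_del", {"name": name, "is_add": 0}))
    return calls

print(translate({}, {"loop0": {"enabled": True}}))
# [('sw_interface_add_del', {'name': 'loop0', 'is_add': 1})]
```

The agent only ever sends the calls implied by the difference between the old and the new configuration, not the whole configuration.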

Honeycomb and hc2vpp can be installed in the same way as VPP, by downloading the repositories from GitHub. You can either:

  • Install Honeycomb
  • Install hc2vpp

For more information, please refer to the hc2vpp official project site.

In the upcoming post, we will introduce you to the Ligato VPP Agent.

You can contact us at

Explore our Pantheon GitHub.

Watch our YouTube Channel.

Vector Packet Processing 101: VPP Plugins & Binary API

In the first part of our new series, we will be building our first VPP platform plugin, using basic examples. We will start with a first dive into plugin creation and finish by introducing VAPI into this configuration.

If you do not know what VPP is, please visit our introductory post regarding VPP and why you should consider using it.

Table of contents:

  • How to write a new VPP Plugin
    • 1. Preparing your new VPP plugin
    • 2. Building & running your new plugin
  • How to create new API messages
  • How to call the binary API
    • Additional C/C++ Examples

How to write a new VPP Plugin

The principle of VPP is that you can plug in a new graph node, adapt it to your network purposes and run it right off the bat. Including a new plugin does not mean you need to change your core code with each new addition. Plugins can either be included in the processing graph, or be built outside the source tree and become an individual component in your build.

Furthermore, this separation of plugins makes crashes a matter of a simple process restart, so a single plugin failure does not require your whole build to be restarted.

1. Preparing your new VPP plugin

The easiest way to create a new plugin that integrates with VPP is to reuse the sample code at “src/examples/sample-plugin”. The sample code implements a trivial “macswap” algorithm that demonstrates a plugin’s run-time integration with the VPP graph hierarchy, API and CLI.

  • To create a new plugin based on the sample plugin, copy and rename the sample plugin directory

cp -r src/examples/sample-plugin/sample src/plugins/newplugin

#replace 'sample' with 'newplugin'. as always, take extra care with sed!
cd src/plugins/newplugin
fgrep -il "SAMPLE" * | xargs sed -i.bak 's/SAMPLE/NEWPLUGIN/g'
fgrep -il "sample" * | xargs sed -i.bak 's/sample/newplugin/g'
rm *.bak*
rename 's/sample/newplugin/g' *

The new directory contains the following files:

    • node.c – implements the functionality of this graph node (swap source and destination address) – update according to your requirements.
    • newplugin.api – defines plugin’s API, see below
    • newplugin.c, newplugin_test.c – implements plugin functionality, API handlers, etc.
  • Update CMakeLists.txt in the newplugin directory to reflect your requirements. The sample plugin’s build definition can serve as a template (the exact source list depends on your plugin):

add_vpp_plugin(newplugin
  SOURCES
  newplugin.c
  node.c

  API_FILES
  newplugin.api

  COMPONENT vpp-plugin-newplugin
)
  • Update newplugin.c to hook your plugin into the VPP graph properly:

VNET_FEATURE_INIT (newplugin, static) =
{
 .arc_name = "device-input",
 .node_name = "newplugin",
 .runs_before = VNET_FEATURES ("ethernet-input"),
};
  • Update newplugin.api to define your API requests/replies. For more details see “API message creation” below.
  • Update node.c to do required actions on input frames, such as handling incoming packets and more

2. Building & running your new plugin

  • Build vpp and your plugin. New plugins will be built and integrated automatically, based on the CMakeLists.txt
make rebuild
  • (Optional) Build & install vpp packages for your platform
make pkg-deb
cd build-root
sudo dpkg -i *.deb
  • The binary-api header files you can include later are located in build-root/build-vpp_debug-native/vpp/vpp-api/vapi
    •  If vpp is installed, they are located in /usr/include/vapi
  • Run VPP and check whether your plugin is loaded (newplugin has to be loaded and listed by the show plugins CLI command)
make run
load_one_plugin:189: Loaded plugin: (Network Address Translation)
load_one_plugin:189: Loaded plugin: (Sample VPP Plugin)
load_one_plugin:189: Loaded plugin: (network delay simulator plugin)
DBGvpp# show plugins
 Plugin Version Description
 1. 19.01-rc0~144-g0c2319f Inbound OAM
 x. 1.0 Sample VPP Plugin

How to create new API messages

API messages are defined in *.api files – see src/vnet/devices/af_packet.api, src/vnet/ip/ip.api, etc. These API files are used to generate corresponding message handlers. There are two types of API messages – non-blocking and blocking. These messages are used to communicate with the VPP Engine to configure and modify data path processing.

Non-blocking messages use one request and one reply message. Message replies can be auto-generated, or defined manually. Each request contains two mandatory fields – “client_index” and “context” – and each reply message contains the mandatory fields “context” and “retval”.

  • API message with auto-generated reply

autoreply define ip_table_add_del
{
 u32 client_index;
 u32 context;
 u32 table_id;
};
  • API message with manually defined reply

define ip_neighbor_add_del
{
 u32 client_index;
 u32 context;
 u32 sw_if_index;
};
define ip_neighbor_add_del_reply
{
 u32 context;
 i32 retval;
 u32 stats_index;
};

Blocking messages use one request and a series of replies, defined in the *.api file. Each request contains two mandatory fields – “client_index” and “context” – and each reply message contains the mandatory field “context”.

  • A blocking message is defined using two structs – *_dump and *_details

define ip_fib_dump
{
 u32 client_index;
 u32 context;
};
define ip_fib_details
{
 u32 context;
};
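The two patterns can be mimicked in a few lines of Python to show how the mandatory fields are used: every reply echoes the request's context, which is how a client pairs replies with outstanding requests. This is a toy model of the message flow, not the real VPP engine.

```python
import itertools

_ctx = itertools.count(1)  # per-client running context value

def ip_table_add_del(table_id):
    """Non-blocking pattern: one request, one (auto-generated) reply."""
    context = next(_ctx)
    reply = {"context": context, "retval": 0}  # the *_reply message
    return context, reply

def ip_fib_dump(fib):
    """Dump pattern: one request, a series of *_details replies."""
    context = next(_ctx)
    return context, [{"context": context, "prefix": p} for p in fib]

ctx, reply = ip_table_add_del(table_id=1)
print(reply["context"] == ctx, reply["retval"])

ctx, details = ip_fib_dump(["10.0.0.0/24", "10.0.1.0/24"])
print(all(d["context"] == ctx for d in details), len(details))
```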

Once you define a message in an API file, you have to define and implement the corresponding handlers for the given request/reply messages. These handlers are defined in one of the component/plugin files and use a predefined naming scheme – vl_api_…_t_handler – for each API message.

Here is an example for existing API messages (you can check it in src/vnet/ip component):

#define foreach_ip_api_msg                  \
_(IP_FIB_DUMP, ip_fib_dump)                 \
_(IP_NEIGHBOR_ADD_DEL, ip_neighbor_add_del)

static void vl_api_ip_neighbor_add_del_t_handler (vl_api_ip_neighbor_add_del_t * mp, vlib_main_t * vm)
{
  /* ... */
}

static void vl_api_ip_fib_dump_t_handler (vl_api_ip_fib_dump_t * mp)
{
  /* ... */
  send_ip_fib_details (am, reg, fib_table, pfx, api_rpaths, mp->context);
}

Request and reply handlers are usually defined in api_format.c (or in the plugin). Requests use a predefined naming scheme – api_… – for each API message, and you also have to define help text for each API message:

static int api_ip_neighbor_add_del (vat_main_t * vam)
{
  /* Construct the API message */
  /* send it... */
  S (mp);
  /* Wait for a reply, return good/bad news */
  W (ret);
  return ret;
}

static int api_ip_fib_dump (vat_main_t * vam)
{
  M (IP_FIB_DUMP, mp);
  S (mp);
  /* Use a control ping for synchronization */
  MPING (CONTROL_PING, mp_ping);
  S (mp_ping);
  W (ret);
  return ret;
}

#define foreach_vpe_api_msg \
_(ip_neighbor_add_del, \
 "(<intfc> | sw_if_index <id>) dst <ip46-address> " \
 "[mac <mac-addr>] [vrf <vrf-id>] [is_static] [del]") \
_(ip_fib_dump, "")

Replies can be auto-generated or manually defined.

  • auto-generated reply using define foreach_standard_reply_retval_handler, with predefined naming
  • manually defined reply with details

How to call the binary API

In order to call the binary API, we will introduce VAPI to our configuration.

VAPI is the high-level C/C++ binary API. Please refer to src/vpp-api/vapi/ for details.

VAPI’s multitude of advantages include:

  • All headers in a single place – /usr/include/vapi => simplifies code generation
  • Hidden internals – one no longer has to care about message IDs, byte-order conversion
  • Easier binapi calls – passing user provided context between callbacks
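One concrete example of a hidden internal: binary API fields travel in network byte order (big-endian). With VAPI you never do this conversion by hand; the equivalent manual conversion for a u32 field such as sw_if_index looks like this:

```python
import struct

sw_if_index = 5
wire = struct.pack("!I", sw_if_index)   # host -> network byte order
print(wire.hex())                        # 00000005
host = struct.unpack("!I", wire)[0]      # network -> host byte order
assert host == sw_if_index
```

Multiply that by every field of every message, and the value of VAPI doing it for you becomes obvious.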

We can use the following C++ code to call our new plugin’s binary API.

#include <cstdlib>
#include <cstdio>
#include <iostream>
#include <cassert>

//necessary includes & macros
#include <vapi/vapi.hpp>
#include <vapi/vpe.api.vapi.hpp>

//include the desired modules / plugins
#include <vapi/newplugin.api.vapi.hpp>

using namespace vapi;
using namespace std;

//parameters for connecting
static const char *app_name = "test_client";
static const char *api_prefix = nullptr;
static const int max_outstanding_requests = 32;
static const int response_queue_size = 32;

#define WAIT_FOR_RESPONSE(param, ret)      \
  do                                       \
    {                                      \
      ret = con.wait_for_response (param); \
    }                                      \
  while (ret == VAPI_EAGAIN)

//global connection object
Connection con;

void die(int exit_code)
{
    //disconnect & cleanup
    vapi_error_e rv = con.disconnect();
    if (VAPI_OK != rv) {
        fprintf(stderr, "error: (rc:%d)", rv);
    }
    exit(exit_code);
}

int main()
{
    //connect to VPP
    vapi_error_e rv = con.connect(app_name, api_prefix, max_outstanding_requests, response_queue_size);

    if (VAPI_OK != rv) {
        cerr << "error: connecting to vlib";
        return rv;
    }

    try {
        Newplugin_macswap_enable_disable cl(con);

        auto &mp = cl.get_request().get_payload();

        mp.enable_disable = true;
        mp.sw_if_index = 5;

        auto rv = cl.execute();
        if (VAPI_OK != rv) {
            throw exception{};
        }

        WAIT_FOR_RESPONSE (cl, rv);
        if (VAPI_OK != rv) {
            throw exception{};
        }

        //verify the reply
        auto &rp = cl.get_response().get_payload();
        if (rp.retval != 0) {
            throw exception{};
        }
    }
    catch (...) {
        cerr << "Newplugin_macswap_enable_disable ERROR" << endl;
        die(1);
    }

    die(0);
}


Additional C/C++ Examples

Furthermore, you are encouraged to try the minimal VAPI example provided in the example archive. This example creates a loopback interface, assigns it an IPv4 address and then prints the address.
Follow these steps:

  • Install VPP
  • Extract the archive, build & run examples
mkdir build; cd build
cmake ..
make

#c example
sudo ./vapi_minimal
#c++ example
sudo ./vapi_minimal_cpp

In conclusion, we have:

  • successfully built and ran our first VPP plugin
  • created and called an API message in VPP

Our next post will introduce and highlight the key reasons why you should consider Honeycomb/hc2vpp in your VPP build.

@ 2020 Vision Executive Summit in Lisbon

I was sent to Lisbon to attend the annual 2020 Vision Executive Summit. Here are my experiences and insights from this event.

The 2020 Vision Executive Summit, presented by Light Reading, was focused mainly on the pending revolution of 5G networks, automation, Edge Computing, IoT, security, etc. It hosted a variety of vendors and service providers, who provided insights into the telecom industry’s current trends and emerging topics.

Themes of the summit

In the case of 5G, we saw a huge opportunity to discuss our future involvement and plans in this revolution. The challenges surrounding 5G were discussed by industry leaders with hands-on experience. This was beneficial, since we were confronted with the harsh reality of 5G. Because it is a technology in progress, many questions are still open. What will be the use cases? What should we prepare for, and when?

Nobody really knows how it may turn out, when it will become widely available to consumers, or if the world is even prepared for it. But it was a great opportunity to meet the people whose bread and butter is building the future 5G network. It was an invaluable experience to see a realistic view from industry insiders and their perception of the future. It was a collective of like-minded individuals and companies in fields relevant to our vision.

Another heavily discussed topic was security. It is no secret that technology is becoming an important part of our lives, and companies have to rely heavily on defenses against potential security threats and cyber attacks. Panels were held regarding the importance of security measures in expanding networks and the need for flexible and agile security solutions.

Subsequently, Edge Computing, which brings the distribution of data closer to the consumer, was also discussed with regard to its vulnerabilities and future. In this case, it was said with certainty that if you are the type of parent who plans your child’s future for them, make them study cyber security. The investment will return sooner than you could imagine.

Our experience at the summit

Our goal in attending this summit was to find out whether it was the right fit for us (spoiler alert – it was) and to check on the newest trends in our field and the direction in which they are developing. The discussions were open and involved real thoughts and views, without the PR and marketing stuff.

Lisbon is an interesting city, since it is more hidden from the eye of the classic tourist. It reminded me, in a way, of San Francisco, mainly due to the trams riding uphill and the many uphill roads one has to take in order to get somewhere. It was surprising, though, how the city makes it a point to keep its original architecture intact, without major reconstructions.

As for the venue itself, the InterContinental Hotel in Lisbon was nothing short of wonderful. Another highlight was the gala dinner. It was the perfect opportunity for casual networking, in the pompous and spectacular setting of the Palacio de Xebregas. I also experienced my first tuk-tuk ride, where I had to consider whether my life was worth the visit.

In conclusion – it was. I am looking forward to the new business partners and connections we have made at the 2020 Vision Executive Summit.

@ Huawei Connect 2018 in Shanghai

We visited Shanghai last week to attend the third annual Huawei Connect. Martin Varga shares some insights from the event.

Activate Intelligence

This year’s theme was Activate Intelligence. Huawei outlined its broad strategy to bring artificial intelligence (AI) to the masses in applications in manufacturing, autonomous driving, smart cities, IoT and other areas. AI will enable billions of new devices to be connected and transfer big data over the network.

The conference was held at the Shanghai World Expo Exhibition Center in Shanghai, China. Huawei put a lot of resources and effort into organizing the event, which showed in the attendance of over 26,000. The conference was organized perfectly, to the last detail (exhibition areas, keynote and conference areas, chill-out zones, etc.).

We witnessed demonstrations of various smart technologies, ranging from smart city applications to smart education and smart transportation. Smart everything.

One of the most impressive technology demonstrations was an AI that was able to translate Chinese to English and vice versa as well as a human translator could. Microsoft, in cooperation with Huawei, states:

“Internal tests have shown, depending on the language, up to a 23 percent better offline translation quality over competing best-in-class offline packs.”

Huawei is also building an AI ecosystem of partners, targeted to exceed 1 million developers over the next three years, backed by US$140m.


We had some interesting meetings with Huawei’s representatives. It was very pleasant to learn about Huawei’s visions for the near future, and we are glad to share the same vision for an exciting future. Huawei invests heavily in researching new technologies, such as AI and IoT, in order to define practical use cases that can be deployed into its product portfolio. As a software development company, we are strongly focused on computer networking, which relates to Huawei’s vision of integrating AI into managing network operations.

Mr. Yang Jin, Director of Network Data Analytics Research at Huawei Technologies Co., stated:

“Artificial Intelligence and Machine Learning will abstract data to make next-generation communication breakthroughs come to life.”

Feel free to contact us if you have an interest in any of the AI, AR/VR, IoT, Intent-Driven Networking, SDN, NFV, Big Data and related areas. We can talk about challenges and how we can solve them together.

Martin Varga

Technical Business Development Manager

FRINX UniConfig is now powered by our SDK

What is it?

It is an SDK that provides components for the development of SDN controllers and applications, based on well-established standards in the networking industry. It takes advantage of our extensive experience from involvement in the OpenDaylight platform, and it simplifies and speeds up the development, integration and delivery of SDN solutions. It also enables SDN programmers to use ODL services in a plain Java SE environment, and it enables a major OpenDaylight distribution vendor to build and deploy their applications faster.

FRINX UniConfig

FRINX UniConfig provides a common network API across physical and virtual devices from different vendors. It leverages an open source device library, which offers connectivity to a multitude of networking devices and VNFs.

The API provides the ability to store intent and operational data from services and devices, commit intent to the network, and sync from the network so that the latest device state is reflected in the controller. It compares intended state with operational state and provides device-wide and network-wide transactions. All changes are applied in such a way that only those parts of the configuration that have changed are updated on the devices.
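The "only what changed" behavior can be illustrated with a small diff over flattened configuration maps. This is a conceptual sketch only; UniConfig's real diff operates on structured YANG-modeled data.

```python
def minimal_update(intended, operational):
    """Compare intended vs operational state and return the minimal
    set of changes to push to the devices."""
    changed = {k: v for k, v in intended.items()
               if operational.get(k) != v}          # new or modified
    removed = [k for k in operational
               if k not in intended]                 # no longer intended
    return changed, removed

intended = {"ifc1": {"mtu": 9000}, "ifc2": {"mtu": 1500}}
operational = {"ifc1": {"mtu": 1500}, "ifc2": {"mtu": 1500}, "ifc3": {}}

changed, removed = minimal_update(intended, operational)
print(changed, removed)  # {'ifc1': {'mtu': 9000}} ['ifc3']
```

Only ifc1 is rewritten and ifc3 removed; ifc2, which already matches the intent, is left untouched on the device.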

The UniConfig framework consists of distinct layers, where each layer provides a higher level of abstraction. The APIs of the lowest layer provide the ability to send and receive unstructured data to and from devices. The unified layer provides translation capabilities to and from OpenConfig. The UniConfig layer provides access to the intent and the actual state of each device, plus the capability to perform transactions and rollbacks of configurations.

NETCONF devices can be configured via their native YANG models or via OpenConfig. Finally, FRINX UniConfig also provides service modules based on IETF YANG models for the configuration of L2VPNs, L3VPNs and enables the collection of LLDP topology information in heterogeneous networks.

The UniConfig Framework is based on open source projects like OpenDaylight and Honeycomb. It publishes all translation units under the Apache v2 license. Customers and integration partners can freely contribute, modify and create additional device models, which work with the UniConfig Framework.

How did PANTHEON’s SDK help?

It helped make UniConfig run and build faster.

Porting UniConfig required no changes to the application code and has brought many measurable improvements. UniConfig now starts faster, has a smaller memory footprint and, most importantly, builds significantly faster. The SDK packs many features, some of which are:

  • Client libraries for communication with ODL back end for Java, Python, and Golang
  • Enhanced NETCONF device simulator
  • Microservice friendly structure
  • Easy to use utilities for YANG model data serialization and deserialization
  • Example applications for integration with other components, which boost your productivity
  • Maintained examples and guides, so that newcomers can start working immediately and be efficient

About FRINX  

FRINX offers solutions and services for open source network control and automation. The team is made up of passionate developers and industry professionals who want to change the way networking software is created, deployed and operated. FRINX offers network automation products and distributions of OpenDaylight and in conjunction with support services. They are proud to count service providers and enterprise companies from the Fortune Global 500 list among its customers.

About us

We are a software research & development company focused on network technologies and prototype software. Yet we do not perceive networks as endless cables behind switches and routers. For us, it is all software-defined. Clean and neat. Able to dynamically expand and adapt according to the customer’s needs.

We thrive in a world of network functions virtualization and the arising need for orchestration, focusing on SDN, NFV, Automotive and Smart Cities. We are experts in OpenDaylight, FD.io VPP, PNDA, Sysrepo, Honeycomb, Ligato and much more.

Powering datacenter management at Kaloom

Complete automation and full forwarding plane programmability

Private data centers are a hot topic for companies and enterprises who are not willing to push all their data into public clouds. Kaloom Software Defined Fabric™ (Kaloom SDF) is the world’s first fully programmable, automated, software-based data center fabric capable of running VNFs efficiently at scale. This is the first data center networking fabric on the market that provides complete automation and full forwarding plane programmability.

Kaloom approached us last year, knowing Pantheon’s intensive and long involvement in SDN, particularly in the OpenDaylight project. OpenDaylight (ODL) is a modular open platform for orchestrating and automating networks of any size and scale. The OpenDaylight platform arose out of the SDN movement, in which we have expertise and experience. Hence, it was a logical step to utilize this expertise in this project and leverage what had already been done.

A traditional ODL-based controller design was not suitable for this job, because of the bulkiness of Karaf-based deployments. Kaloom requested a modern web UI, which the vanilla ODL platform does not provide. Our component library provides the opportunity to run ODL services such as MD-SAL, NETCONF and YANG Tools in any modern web server stack, and to integrate with other components like MongoDB.


The following architecture is starting to become a blueprint for SDN applications today. We utilize the best of both worlds:

  1. MD-SAL, NETCONF and YANG Tools from ODL
  2. Updated modern web stack Jetty/Jersey and
  3. MongoDB as a persistent data store.


This is how the Kaloom Fabric Manager (KFM) project started. After several months of customization and development, we deployed a tailored web application which provides a management UI for Kaloom SDF. We changed and tailored our Visibility Package application to suit Kaloom’s requirements and specifics; this specialized version uses the name KFM. The architecture diagram above shows the details/internals of KFM and how we interconnect with Kaloom’s proprietary Fabric Manager/Virtual Fabric Manager controller devices.

The back-end of the KFM, with its NETCONF plugin, provides REST services to the Angular UI, which uses our Network Topology Visualization Component for better topology visualization and user experience. Using these REST endpoints, it is easy to send a specific NETCONF RPC to the Kaloom SDF controllers.

While working on this next-gen Data Center Infrastructure Management software, we realized that integrating all the moving parts of the system is a crucial step for final delivery. Since different teams were working on different parts, it was crucial that we could isolate parts of the system and adapt them to the Kaloom SDF as much as possible. We used our field-tested NETCONF device simulator to deliver software that was tested thoroughly, providing stability of the KFM UI.

Kaloom SDF provides a solution for physical data centers administrated by Data Center Infrastructure Provider (DCIP) users. A physical data center can be easily sliced into virtual data centers offered to customers, operated by virtual Data Center Operator (vDCO) users. The DCIP user can monitor and configure the physical fabrics – PODs – of the data center. The KFM web UI shows the fabrics in a topology view and allows updating the attributes of fabrics and fabric nodes.

Topology View of Fabric Manager

Advantages for DCIP

The main task of a DCIP user is to slice the fabrics into virtual data centers and virtual fabrics. This process involves choosing servers through associated termination points and associating them with a newly created virtual fabric manager controller. Server resources are then used through the virtual fabric manager by vDCO users.

vDCO users can use the server resources and connect them via the network management of their virtual data center. A vDCO can attach server ports to switches with the proper encapsulation settings. After a switch is ready, the vDCO can create a router and attach switches to it. The router offers different configuration possibilities to meet the vDCO user’s needs: L3 interface configuration, static routing, BGP routing, VXLANs and many more. KFM also offers a topology view of the virtual data center network, so you can check the relations between servers, switches and routers.

Topology View of Fabric Manager

For more details about the KFM UI in action, please see the demo video with the NETCONF simulator of Kaloom SDF below, or visit Kaloom or Kaloom Academy.

Running 5G on xRAN

In April 2018, the xRAN Forum released the Open Fronthaul Interface Specification – the first specification made publicly available by xRAN since its launch in October 2016. The released specification allows a wide range of vendors to develop innovative, best-of-breed remote radio units/heads (RRU/RRH) for a wide range of deployment scenarios, which can be easily integrated with virtualized infrastructure & management systems using standardized data models.

This is where we came onto the scene: we became one of the first companies to introduce a full-stack 5G-compliant solution based on this specification.

With just a few days spent coding and utilizing readily available components, we created a Radio Unit (RU) simulator and an SDN controller to manage a group of Radio Units.

Now, let us inspect the architecture and elaborate on some important details.

We used our generic NETCONF simulator to set up an xRAN Radio Unit (RU) simulator. xRAN specifies YANG models for 5G Radio Units. Our NETCONF device library is used as a base, which made it easy to add custom behavior, so the 5G RU is ready to stream data to a 5G controller.

The code in the controller pushes the data collected from RUs into Elasticsearch for further analysis. The RU device emits notifications for the simulated Antenna Line Devices (ALDs) connected to the RU, containing:

  • Measured Rx and Tx input power in mW
  • Tx Bias Current in mA (Internally measured)
  • Transceiver supply voltage in mV (Internally measured)
  • Optional laser temperature in degrees Celsius (Internally measured)
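To make the telemetry flow concrete, here is a minimal sketch of how a controller could parse one such ALD measurement notification. The XML element names and the `urn:example:xran-pm` namespace are illustrative assumptions loosely modeled on the measurements listed above, not the real xRAN YANG model.

```python
import xml.etree.ElementTree as ET

# Hypothetical NETCONF notification payload carrying the ALD measurements.
NOTIFICATION = """
<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
  <eventTime>2019-05-01T12:00:00Z</eventTime>
  <ald-measurement xmlns="urn:example:xran-pm">
    <rx-input-power-mw>0.82</rx-input-power-mw>
    <tx-input-power-mw>1.10</tx-input-power-mw>
    <tx-bias-current-ma>36.5</tx-bias-current-ma>
    <supply-voltage-mv>3300</supply-voltage-mv>
    <laser-temperature-c>41.7</laser-temperature-c>
  </ald-measurement>
</notification>
"""

NS = {"pm": "urn:example:xran-pm"}

def parse_ald_measurement(xml_text):
    """Extract the measured values from a single ALD notification."""
    root = ET.fromstring(xml_text)
    meas = root.find("pm:ald-measurement", NS)
    # Strip the "{namespace}" prefix from each tag and convert to float.
    return {child.tag.split("}", 1)[1]: float(child.text) for child in meas}

measurement = parse_ald_measurement(NOTIFICATION)
print(measurement["tx-bias-current-ma"])  # 36.5
```

A real controller would subscribe to the notification stream and feed each parsed dictionary into its analytics pipeline.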

*We used the xRAN-performance-management device model for this purpose.

Our SDK as a 5G controller

We created an OpenDaylight-based SDN controller that can connect to RU simulators using NETCONF. Once an RU device is connected, telemetry data is pushed via NETCONF notifications to the controller, and then directly into Elasticsearch.
Usually, Logstash is required to upload data into Elasticsearch. In this case, it is the 5G controller that pushes device data directly into Elasticsearch, using time-series indexing.
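The following sketch illustrates what "pushing device data directly into Elasticsearch with time-series indexing" can look like: one index per day, one JSON document per telemetry sample. The index naming scheme (`ru-telemetry-YYYY.MM.DD`) and document fields are assumptions for illustration, not the controller's actual schema.

```python
from datetime import datetime, timezone
import json

def telemetry_document(device_id, metrics, ts):
    """Return (index_name, json_body) for one telemetry sample."""
    index = f"ru-telemetry-{ts:%Y.%m.%d}"   # one index per day (time-series)
    body = json.dumps({"@timestamp": ts.isoformat(),
                       "device": device_id,
                       **metrics})
    return index, body

ts = datetime(2019, 5, 1, 12, 0, tzinfo=timezone.utc)
index, body = telemetry_document("ru-sim-01", {"tx_bias_current_ma": 36.5}, ts)
print(index)  # ru-telemetry-2019.05.01
# The pair would then be POSTed to the Elasticsearch _doc endpoint of that
# index, with no Logstash stage in between.
```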
On a Radio Unit connect event, the monitoring process starts automatically. The ald-communication RPC is called on the RU device, collecting statistics for:

  • The number of frames with incorrect CRC (FCS) received from the ALD – running counter
  • The number of frames without a stop flag received from the ALD – running counter
  • The number of octets received from the HDLC bus – running counter

*We used the xran-ald.yang model for this purpose.
The 5G controller also listens to the notifications from the RU device mentioned above.

Elasticsearch and Kibana

Data collected by the 5G controller via RPC calls and notifications is pushed directly into Elasticsearch indices. Once indexed, Elasticsearch provides a wide variety of queries over the stored data.
Typically, we can display the number of faulty frames received from the Antenna Line Devices over time, or analyze operational parameters of Radio Unit devices, such as receiving and transmitting input power.
Such data is precious for Radio Unit setup, making a control-plane feedback loop possible.

By adding Elasticsearch into the loop, the data analytics and feedback loop are ready to perform complex tasks, such as computing faulty-frame statistics from the Antenna Line Devices or tuning the Radio Unit operational setup.

How do we see the future of xRAN with our SDK?

The benefit of this solution is a full-stack xRAN test. YANG models and specifications alone are obviously not enough, considering the size of the project. With 5G xRAN, we invite Radio Unit device vendors and 5G network providers to cooperate and build upon this solution. Having the Radio Unit simulators available and ready allows for a quick development cycle, without being blocked by RU vendors’ bugs. Our SDK has been used as a 5G rapid application development platform, enabling a quick setup of an xRAN Radio Unit monitoring system.
We can easily certify xRAN Radio Units against the 5G controller and provide RU simulations for the management plane.

Visit our page and check out our GitHub for more details.

PANTHEONtech & OpenDaylight – dedication and continuous support

The year was 2001, when our software research and development company was established in Bratislava, Slovakia, with a focus on computer network technologies and love and care for open-source software development.

Twelve years into the life of our company, on 22 March 2013 to be exact, the OpenDaylight (ODL) project saw its first daylight with the initial code drop. It had the ambition to become the biggest and most successful open-source Software Defined Networking (SDN) controller in the world.

ODL is a collaborative open-source project aimed at speeding up the adoption of SDN and creating a solid foundation for Network Functions Virtualization (NFV). ODL was founded by global industry leaders such as Cisco, Ericsson, Intel, IBM, Dell, HP, Red Hat and Microsoft, and is open to all. Our care for OpenDaylight goes back to when it was forming. In a sense, we helped define what an SDN controller is and should be. This requires dedication, which we have proven over the years with an extensive amount of contributions from our expert developers.

Throughout ODL’s lifespan, we have always been among the top 5 contributors.

Looking at the per-committer view in the chart on the left, the top committer, Robert Varga, proved his dedication by making over 20% of the total commits single-handedly.

State of Commits

We have made 12,098 commits with our sharp developers, in over 50 ODL projects throughout OpenDaylight’s lifespan. That is 740,919 added and 664,588 removed lines of code.

That translates to roughly 20 great novels’ worth of text.

ODL continues to be a great example of what open-source software is and how international contributors can collaborate beautifully to create the next great thing.

In the last week of November 2018, Bitergia, a software development analytics company, published a report on the past and current status of the OpenDaylight project, which plays a significant role in our offerings and solutions. Our CTO, Robert Varga, leads the list of per-user contributions to the source code of OpenDaylight, with over 980 commits in Q3 of 2018. This achievement further establishes our position as one of the largest contributors to the OpenDaylight project.

As for the list of companies contributing to the source code of OpenDaylight, we are the 2nd largest contributor for Q3/2018, with 1,034 commits. We were just 34 commits shy of the top contributor position, which belongs to Red Hat.

Inside OpenDaylight

Now, let’s touch on OpenDaylight’s internal governance: the TSC, PTLs, and more.

ODL is now a founding member of LF Networking (LFN), an entity that integrates the governance of participating projects to enhance operational excellence, simplify member engagement, and increase collaboration across open source networking projects and standards bodies.

The OpenDaylight Technical Steering Committee (TSC) provides leadership regarding the technical direction of the ODL platform, as well as guidance on collaborative practices. Its members are elected by the ODL community to serve one-year terms. Currently, there are only 13 TSC members on duty, and we are proud that our CTO, Robert Varga, is one of them.

A Project Technical Lead (PTL) is the leader of an OpenDaylight project, who decides whether any proposed change becomes part of the project. PTLs are elected by the committers, thanks to their expertise in the field.

Anyone can contribute to ODL, which is an open-source project, but PTLs are the ones who lead a project in its intended direction. If you want to learn how to get started with OpenDaylight, you can read more as a developer or as a user.

ODL started with just five projects and thrived. Currently, there are 91 active projects, and we are proudly involved in, and have contributed to, a large proportion of them. Just another example of our expertise on the subject.

Apart from our top-tier contributions to ODL, we have always been a strong supporter of the open-source community. This support covers a wide range of developers, from local private open-source development groups to international open-source platforms. For example, we continuously support our local Open Source Networking User Groups (OSNUG), OpenDaylight User Groups, and OSS weekend meetings, to name a few. But our most generous act was to open-source one of our best developments.

Future of OpenDaylight

The development of OpenDaylight Sodium is already producing significant improvements, some of which are finding their way to the upcoming Service Release of Neon.

These are the test results for OpenDaylight Neon SR2. We have recorded significant improvements in the following areas:

  • Datastore snapshot-size: ~49% reduction
  • Processing time: ~58% reduction
  • In-memory size: ~25% reduction
  • Object count: ~25% reduction
  • NodeIdentifier: ~99.9% reduction
  • AugmentationIdentifier: ~99.9% reduction

These enhancements will also be present in the future, long-awaited release of Sodium.

Our SDK in Data Center Management

The advantages of deploying our SDK in Data Center Infrastructure Management (DCIM)

The DCIM market continues to evolve, and large enterprises continue to be the primary adopters of new DCIM software solutions. The goal of a DCIM software initiative is to give administrators the ability to identify, locate, visualize, and manage all physical data center assets with a holistic view. We have developed our SDK based on OpenDaylight, in Java SE. It is a great fit for implementing customized DCIM solutions, such as an SDN controller, NFV orchestrator, or VNF management.

Some of the great features you will benefit from while managing your data center are listed below.

Scheme and use-case description

Model-driven approach

Our SDK implements a model-driven approach to data center infrastructure management. Because common models are used, the intercommunication of configuration, operational, monitoring, and telemetry data becomes possible across all parts of the system built on them.

These models define the structure, syntax, and semantics of the data processed by each part of the system. Usage of standardized models by vendors (e.g., models from OpenConfig or the IETF) leads to seamless migration from one vendor to another.
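The core idea can be sketched in a few lines: a single shared model describes the data, and code derived from it validates structure, syntax and semantics for every component that touches the data. The tiny `Interface` model below is an illustrative assumption, not a real OpenConfig/IETF module.

```python
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class Interface:
    """One shared model; every part of the system relies on its checks."""
    name: str
    mtu: int
    enabled: bool

    def __post_init__(self):
        # Semantic checks defined once, in the model.
        if not self.name:
            raise ValueError("interface name must be non-empty")
        if not (68 <= self.mtu <= 9216):
            raise ValueError("mtu out of range")

def from_external(data):
    """Syntactic check: reject unknown or missing fields from outside."""
    expected = {f.name for f in fields(Interface)}
    if set(data) != expected:
        raise ValueError(f"fields must be exactly {sorted(expected)}")
    return Interface(**data)

iface = from_external({"name": "eth0", "mtu": 1500, "enabled": True})
print(iface.mtu)  # 1500
```

In the real system the classes are generated from YANG models rather than written by hand, but the benefit is the same: every component validates data against the one shared definition.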

Scalability and controller hierarchy

  • Horizontal scalability – our SDK supports clustering, a feature which allows horizontal scaling of the system by adding more instances (nodes) of the controller into a cluster.
  • Controller hierarchy – our SDK’s NB plugins allow the implementation of upper-layer applications running as microservices and performing operations using the controller’s NB plugin API. It is also possible to design a hierarchy of controllers, where the upper-layer controller(s) perform operations using the lower-layer controllers’ NB plugins. One of the implemented NB plugins implements the NETCONF protocol; using it in a hierarchy of controllers makes it possible to manage the lower-layer controllers as NETCONF devices.
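The controller-hierarchy idea above can be sketched as a toy model: a controller that also exposes a NETCONF-like northbound interface can itself be mounted by an upper-layer controller, exactly as if it were an ordinary device. The class and method names are illustrative assumptions, not a real API.

```python
class NetconfDevice:
    """Anything manageable over NETCONF: a switch, or another controller."""
    def __init__(self, name):
        self.name = name
        self.config = {}

    def edit_config(self, key, value):
        self.config[key] = value

class Controller(NetconfDevice):
    """A controller that is also a NETCONF device, so it can sit in a hierarchy."""
    def __init__(self, name):
        super().__init__(name)
        self.mounted = {}

    def mount(self, device):
        self.mounted[device.name] = device

    def edit_config(self, key, value):
        # Slash-separated paths are delegated down to the mounted layer.
        if "/" in key:
            target, subkey = key.split("/", 1)
            self.mounted[target].edit_config(subkey, value)
        else:
            super().edit_config(key, value)

leaf = NetconfDevice("switch-1")
lower = Controller("dc-controller")
lower.mount(leaf)
upper = Controller("global-controller")
upper.mount(lower)

# The upper layer configures the switch through the lower controller.
upper.edit_config("dc-controller/switch-1/mtu", 9000)
print(leaf.config)  # {'mtu': 9000}
```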

Security

Our SDK is implemented in Java, which is by nature a type-safe programming language. Type safety leads to more secure software than software written in, e.g., C/C++, while still reaching good performance. The model-driven approach and source code generation also support software security.

These features minimize the possibility of errors in the code by requiring the verification of input data from external applications and connected devices. Encryption, authorization, and the usage of certificates are a matter of course.

Legacy and heterogeneous systems support

Our SDK implements the main SDN standards, e.g., NETCONF, RESTCONF, and YANG. Moreover, an SNMP southbound plugin is available for legacy technologies. This demonstrates the capability of being used not only in green-field deployments (implementing the system from scratch) but also in brown-field deployments, where a heterogeneous set of networking devices needs to be managed.


Extensibility

As a software design principle, the model-driven approach speeds up and simplifies the implementation of extensions, and the architecture results in great extensibility. The architecture defines Northbound (NB) and Southbound (SB) plugin implementations as model-driven modules.

NB & SB Plugins

NB plugins enable the communication of the controller with upper-layer applications, such as dashboards, upper-layer controllers, and inter-DC orchestrators. The upper-layer applications can be implemented as an external service or as a native module of the controller.

The upper-layer applications mostly implement application logic, business logic, administration interfaces, data analytics, data transformation, etc. NB plugins can be used to:

  • submit commands to the SDN controller,
  • send notifications from the controller to the upper layers,
  • send telemetry data from the controller to the upper layers,
  • monitor the controller from the upper layers,
  • read the operational data of the controller and of the devices orchestrated by the controller,
  • configure the controller itself or a specific device orchestrated by the controller.

SB plugins implement protocols and technologies, extending the SDN controller’s capabilities with new standards and allowing connections to new network devices. SB plugins can be used for:

  • the configuration of networking devices,
  • fetching operational (state) data of the networking devices,
  • receiving telemetry data,
  • monitoring of devices,
  • submitting commands to the devices,
  • receiving notifications from devices.
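The capabilities listed above can be captured in a plugin contract. Below is a minimal sketch of what a southbound-plugin interface could look like, together with a fake in-memory implementation of the kind used in tests. Both the interface and the class names are illustrative assumptions, not the SDK's real API.

```python
from abc import ABC, abstractmethod

class SouthboundPlugin(ABC):
    """Contract every SB plugin (NETCONF, RESTCONF, SNMP, ...) would fulfil."""
    @abstractmethod
    def edit_config(self, device, config): ...
    @abstractmethod
    def get_state(self, device): ...
    @abstractmethod
    def submit_command(self, device, command): ...

class InMemoryPlugin(SouthboundPlugin):
    """Fake SB plugin for testing: keeps device state in a dict."""
    def __init__(self):
        self.state = {}

    def edit_config(self, device, config):
        self.state.setdefault(device, {}).update(config)

    def get_state(self, device):
        return dict(self.state.get(device, {}))

    def submit_command(self, device, command):
        return f"{device}: executed {command}"

plugin = InMemoryPlugin()
plugin.edit_config("router-1", {"hostname": "r1"})
print(plugin.get_state("router-1"))  # {'hostname': 'r1'}
```

Because the controller only depends on the abstract contract, a new protocol is added by supplying one more implementation, without touching the upper layers.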

Models and the model-driven approach simplify the implementation of new plugins and upper-layer applications: the models allow source-code generation of classes (an OOP construct) and of related code that verifies the syntax and semantics of the data, which minimizes the probability of implementation errors caused by human interaction.

If you would like to know more about our SDK and how it could improve your business, visit our Product Page.

UI: Network Topology Visualization Component

We have developed a network topology visualization component. Its main purpose is to provide a responsive and scalable front-end network topology visualization application on top of our SDK. The topology visualization component enables you to visualize any topology on any device with a web browser. It will also be included within the distribution package.

We, as a successful software development company, were compelled to create our own solution based on the technologies we know and like to use, since other existing commercial applications fail to cover network topology visualization sufficiently.

Watch our series of videos on the Visibility Package here

Our experience developing the Visibility Package, a software component used to gather and visualize network topology data from different networks, network management systems, and cloud orchestrators, led our developers to create a better solution. Using the network topology visualization component will significantly reduce the time you spend on development.

Check out our GitHub

We have developed the topology visualization component as an Angular component, which can be used in Angular applications to create network visualization applications. Thanks to its modularity and customizability, the component can visualize any network, from small company networks to large-scale data centers with thousands of nodes and links.

Picture 1: A screenshot of a spine-leaf network visualization sample.


As every use case’s demands, requirements, and scale widely differ from each other, a scalable and universal component was needed. That is why we have based the topology visualization component on the Angular framework, which allows rapid development of responsive, modular and scalable applications.

Our previous experience showed us that SVG technology does not perform well with very large network topologies. That is why we decided to use HTML5 Canvas instead. Canvas provides seamless animations and great responsiveness, even with thousands of nodes and links.

Some of the great features of the topology visualization component are

  • Ease of use

The topology visualization component includes extensive documentation and examples to help developers during application creation. With the Angular CLI, a basic application can be set up in minutes.

  • Customizability

The basic application could easily be customized to the desired state. We have developed the topology visualization component with customization in mind.

  • Modularity

The topology visualization component is developed as separate modules. Developers can decide which modules are needed for a particular project and add other modules whenever they are required.

  • Speed & Responsiveness

Angular and HTML5 Canvas are used to ensure that, even with large amounts of data, the application runs effortlessly.

  • Scalability

The topology visualization component works with small network topologies of a few nodes and links, but truly shines with large-scale topologies. We are continually adding new features based on our clients’ requests and needs. Watch this space for many exciting features to be announced in the near future.

How can our SDK speed up 5G connectivity deployment?

Our product is a Software Development Kit (SDK) which provides components for the development of Software Defined Networking (SDN) controllers, based on commonly used standards in the networking industry. We have used our experience from OpenDaylight (ODL) to create an SDK which will empower you to simply develop, integrate, and deploy a tailored SDN controller.

An SDN controller plays an essential role as an orchestrator of networking infrastructure in 5G networks. It is used not only for the configuring and monitoring of the physical routers and switches, but also for managing virtual networks of Virtual Machines (VMs) and containers. Among many great benefits of an SDN controller (or set of interconnected SDN controllers) is that it has a holistic view of the network. An SDN controller is also used for connecting User Equipment (UE) or Customer Premise Equipment (CPE) to data centers and enables technologies such as network slicing and edge computing to be used in the 5G.

Network slicing requires the ability to configure and monitor all networking devices (physical or virtual) along the path of the traffic. For edge-computing purposes, it is necessary to automate the configuration of the devices in order to support 5G scenarios such as UE registration.

Figure 1: Overview of a 5G network architecture


Figure 1 (above) shows how an SDN controller based on our SDK uses southbound plugins to read and write the configuration and state of networking devices in the WAN and in the physical or virtual networks of data centers, both core and edge. The SDK supports many southbound protocols for network orchestration, such as the NETCONF and RESTCONF protocol plugins. The number of vendors and devices supporting these protocols grows every year, and we believe that many devices and appliances in Radio, Edge, and WAN will speak these protocols in the 5G era. The SDK also contains Pantheon’s SNMP SB plugin, for integration with legacy systems and for heterogeneous environments where the old and the new mix.

The modular architecture allows adding new plugin implementations for other protocols. The SDK exposes the configuration and operational data of all devices to an upper layer, where the business logic of administration and automation applications can be implemented. These APIs can also be accessed remotely via the REST API, and other communication methods can be implemented as northbound plugins. The upper-layer applications can be designed as microservices or as part of the SDN controller.


Figure 2: An example of a 5G network using the VPP data plane

As mentioned above, an SDN controller is also necessary for the orchestration of virtualized networks in data centers. The open-source VPP project is one particular example of such technology. VPP implements a configurable data plane running in user space, not in kernel space. Thanks to this, the data plane can be deployed as an ordinary microservice, e.g., as a container. VPP can be used for the interconnection of containers or VMs in data centers, and all of the VPP instances can be orchestrated by an SDK-based SDN controller.

Figure 3: An example of a 5G network and integration with other IoT networks

Besides connecting mobile phones and tablets to the network, 5G will also enable a vast number of Internet of Things (IoT) devices to be connected to the internet and to communicate directly with each other. IoT solutions can leverage SDN controllers for similar purposes as other 5G technologies do. Specific VNFs for IoT can be deployed and orchestrated by an SDN controller, whether at the edge or in core data centers. Network slicing could be used for smart-car and smart-city solutions, as shown in Figure 3 (above).

In this way, 5G networks will enable the adoption of IoT in everyday human life. The number of IoT devices expected to connect to the internet in the upcoming years is substantial. According to Gartner’s predictions, IoT technology will be in 95 percent of electronics by 2020 [1]. According to another forecast, from Cisco, 50 billion devices will connect to the internet by 2020 [2].

Here is a brief summary of the features and benefits provided by our SDK:

  • The modular architecture of southbound plugins allows the implementation of communication with physical and virtualized networking devices.
  • Configuration and operational data of all orchestrated devices is exposed via northbound plugins for administration, automation, and analytics purposes.
  • MD-SAL (Model-Driven Software Abstraction Layer) provides the data store and services used by other parts of the SDN controller, such as southbound and northbound plugins. The data processed by MD-SAL is modeled in the YANG modeling language.
  • NETCONF and RESTCONF southbound plugins are available and field-tested.
  • An SNMP plugin for integration with legacy systems is also available.
  • The NETCONF protocol can be used for orchestration of the VPP data plane, interconnecting VMs or cloud-native applications in data centers.
  • Our SDK has a lightweight hardware footprint, hence it responds promptly.
  • It is ready for microservice environments.
  • It provides faster and cheaper testing and CI.
  • It is an easy tool for developing and deploying SDN in 5G networking infrastructures.

Ready to test how our SDK works? Send us an email and we will provide you with a trial version.






PANTHEONtech at Open Networking Summit (ONS) 2018

We had a unique opportunity to participate in the Open Networking Summit (ONS) 2018. The central topic of ONS 2018 was data center solutions: ONAP and Kubernetes-based systems. Several new projects under the wings of the Linux Foundation were also introduced, for example Acumos AI, Akraino Edge Stack, and DANOS (the Disaggregated Network Operating System project), an operating system for white-box switches. We have traditionally participated in OpenDaylight (ODL) development, and we launched our product at ONS. It changes the conventional OpenDaylight approach to building SDN controller applications, making them smaller, nimbler, and microservice-ready. The product caught the attention of OpenDaylight community members, as well as customers struggling with real-life OpenDaylight deployments. This solution helps to consume and deploy OpenDaylight services faster, with a lower cost of ownership. Faster builds, quick test runs, and smaller distribution sizes are the right way to proceed. It also brings added value to the ONAP ecosystem, providing a runtime for ONAP’s SDN-C. We are continuously updating the community with use-case examples and use-case videos.


One of the community projects in which we participate is The Fast Data Project (FD.io). There, we presented Ligato: Honeycomb’s younger brother, an ’easy to learn and easy to use’ integration platform. We love to see that the community is growing larger, not only in the number of contributors, but in the number of projects and use-cases as well.

We were also pleased to accept an invitation to the introduction of a new project, “Dual Modes, Multi-Protocols, Multi-Instances” (DMM), where we discussed use-cases and integration paths from the current networking stack. The FD.io community has the potential for further growth, especially as we see the networking industry shift from closed-source, hardware-based network functions to open-source, software-based solutions.

ONS 2018 was an exciting opportunity for us. It was a forum where we could easily share our knowledge and provide a much-needed innovation. Let’s see how artificial intelligence and machine learning will change the landscape of networking in the upcoming years. See you at the next ONS event!


PyCon SK 2018

Thanks to my employer, I had an opportunity to attend the PyCon SK conference, which took place on March 9–11, 2018 at the Faculty of Informatics and Information Technologies of the Slovak University of Technology in Bratislava. Its intent was to promote Python and to spread open-source technologies and ideas. Speakers were professionals from various areas of software development, from documentation writers through big-data analysts to coders as such. Thus, the lectures covered a wide range of topics, and possibly anyone could have found their cup of tea.

Friday, 9 March

The day started with Alex Ellis’s talk about OpenFaaS (Functions as a Service). He introduced the OpenFaaS project and explained how to build one’s own serverless functions in containers using Docker, Kubernetes, or other orchestrators through its extensible architecture. The talk included practical demonstrations of serverless functions, such as voice-driven retrieval of weather information, turning black-and-white pictures colourful in one click, etc.

Later on, the talks continued with Mikey Ariel, also known as That Docs Lady. She talked about docs and the community, pointing out various types of project documentation – from READMEs, through quickstart tutorials, to error messages. The talk introduced or re-acquainted us with topics such as content strategy, docs-as-code, optimized DevOps for docs, and contribution workflows. One of the many witty observations she made was: “Instead of documenting a bunch of bugs, why not fix them?!”

Saturday, 10 March

For me, personally, Saturday provided few highlights.

Anton Caceres talked about big-data analysis and the libraries and tools that Python provides in this area. What he emphasized as the core skills of data scientists were the ability to read data, visualize it, formulate the right questions, and apply one’s imagination while answering those questions through visual presentation of the data.

Another interesting talk was by Michael Kennedy, on “Pythonic code, by example”. He explained the concepts of writing idiomatic code in Python (i.e., Pythonic code) that is most aligned with the language’s features and ideals. The talk took us on a tour of the more important Pythonic concepts, using many examples of perfectly functional but non-Pythonic Python code alongside their Pythonic equivalents. Most of the code examples were written in Python 3.5.

Ryan Kirkbride gave the last talk of the day; or better said, a performance. He suggested that while coding is mostly a rather lonely activity in which a coder interacts with the program, there is also a way to make coding an interactive activity shared with a community. He provided an example himself, by live-coding a program that generated music. The idea of sharing the experience of coding with others underlined the idea behind the conference: collaboration, sharing, and community.

Sunday, 11 March

On Sunday, we had a look at end-to-end testing of an application’s UI. Vladimir Kopso spoke about writing an end-to-end testing automation framework and shared some tips for making the code cleaner. He also spoke about running multiple test suites in parallel in Docker containers and the time savings this approach brought to running automation test suites.

Tibor Arpáš presented his ideas on how to make writing code in various IDEs more efficient and how to give coders valuable information about their code. He pointed out that when code runs, valuable information is created about the code itself, and he came up with a few ideas on how to display this information together with the code in one place.

To sum it up, in three days full of Python and open-source topics, we learned a lot from the speakers. Some of them were better, some a bit boring, but a few were highly motivating and engaging. Community was the leitmotif that appeared across almost all of them, and it was also apparent in the overall atmosphere of openness in the hallways, where you could approach the speakers and discuss with them.

Big thanks to PANTHEONtech and to the organizers of PyCon SK 2018 for this amazing experience.


Daša Šimková

YANG Tools 2.0.1 integrated in OpenDaylight Oxygen

OpenDaylight’s YANG Tools project forms the bottom-most layer of OpenDaylight as an application platform. It defines and implements interfaces for modeling, storing and transforming data modeled in RFC 7950, known as YANG 1.1 — such as a YANG parser and compiler.

What is YANG Tools?

Pantheon engineers started developing yangtools some 5 years ago. It originally supported RFC6020, going through a number of different versions. After releasing yangtools-1.0.0, we introduced semantic versioning as an API contract. Since then, we have retrofitted original RFC6020 meta-model to support RFC7950. We also implemented the corresponding parser bits, which were finalized in yangtools-1.2.0 and shipped with the Nitrogen Simultaneous Release.
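"Semantic versioning as an API contract" means that the MAJOR.MINOR.PATCH number itself tells consumers what they can rely on: the same MAJOR version stays API-compatible, while MINOR and PATCH releases only add functionality or fix bugs. A small sketch of that compatibility rule (the helper is illustrative, not part of yangtools):

```python
def parse_semver(version):
    """Split 'MAJOR.MINOR.PATCH' into a comparable tuple of ints."""
    major, minor, patch = version.split(".")
    return int(major), int(minor), int(patch)

def api_compatible(provided, required):
    """provided satisfies required if MAJOR matches and provided >= required
    (MINOR/PATCH releases may only add functionality, never remove it)."""
    p, r = parse_semver(provided), parse_semver(required)
    return p[0] == r[0] and p >= r

print(api_compatible("2.0.1", "2.0.0"))  # True
print(api_compatible("2.0.1", "1.2.0"))  # False: MAJOR bump broke the API
```

This is why the yangtools-1.x to 2.x transition described below required coordinated downstream integration: a MAJOR version bump explicitly signals incompatible API changes.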

This release entered its development phase on August 14th, 2017. yangtools-2.0.0 was released on November 27th, 2017, which is when the search for an integration window started. Even though we had the most critical downstream integration patches prepared, most downstreams had not even started theirs. Integration work and coordination were quickly escalated to the TSC, and the integration finally kicked off on January 11, 2018.

Integration was mostly complicated by the fact that odlparent-3.0.x was riding with us, along with the usual Karaf/Jetty/Jersey/Jackson integration mess. It is now sorted out, with yangtools-2.0.1 being the release shipped in the Oxygen Simultaneous Release.

What is new in yangtools-2.0.1?

  • 309 commits
  • 2009 files changed
  • 54126 insertions(+)
  • 45014 deletions(-)

The most user-visible change is that in-memory data tree now enforces mandatory leaf node presence for operational store by default. This can be tweaked via the DataTreeConfiguration interface on a per-instance basis, if need be, but we recommend against switching it off.
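What "enforcing mandatory leaf presence" means can be sketched with a toy in-memory data tree whose per-instance switch mirrors the idea of DataTreeConfiguration. The dict-based tree and schema below are illustrative assumptions, not the real yangtools API.

```python
# Toy schema: the "interface" node has two mandatory leaves and one optional.
SCHEMA = {"interface": {"mandatory": ["name", "type"], "optional": ["mtu"]}}

class DataTree:
    def __init__(self, enforce_mandatory=True):
        # Enforcement is on by default, as in yangtools-2.0.1's
        # operational store; it can be switched off per instance.
        self.enforce_mandatory = enforce_mandatory
        self.data = {}

    def commit(self, node, leaves):
        if self.enforce_mandatory:
            missing = [l for l in SCHEMA[node]["mandatory"] if l not in leaves]
            if missing:
                raise ValueError(f"missing mandatory leaves: {missing}")
        self.data[node] = leaves

tree = DataTree()                               # default: enforcement on
tree.commit("interface", {"name": "eth0", "type": "ethernet"})

relaxed = DataTree(enforce_mandatory=False)      # explicitly switched off
relaxed.commit("interface", {"name": "eth0"})    # accepted without "type"
```

Committing `{"name": "eth0"}` to the default-configured tree would raise, which is exactly the safety net the recommendation to keep enforcement on is about.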

For downstream users using karaf packaging, we have split our features into stable and experimental ones. Stable features are available from features-yangtools and contain the usual set of functionality, which will only expand in its capabilities. Experimental features are available from features-yangtools-experimental and carry functionality which is not stabilized yet and may get removed — this currently includes ObjectCache, which is slated for removal, as Guava’s Interners are better suited for the job.

Users of yang-maven-plugin will find that YANG files packaged in jars now have their names normalized to RFC7950 guidelines. This includes using the actual module or submodule name as well as capturing the revision in the filename.
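The RFC 7950 file-naming guideline boils down to `name["@" revision].yang`, with the actual module or submodule name and the revision date in the filename. A small sketch of that normalization (the helper is illustrative, not the actual yang-maven-plugin code):

```python
from typing import Optional

def normalized_yang_filename(module_name, revision: Optional[str] = None):
    """Build a YANG filename per RFC 7950: name["@" revision-date].yang."""
    if revision is None:
        return f"{module_name}.yang"
    return f"{module_name}@{revision}.yang"

print(normalized_yang_filename("ietf-interfaces", "2018-02-20"))
# ietf-interfaces@2018-02-20.yang
```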

API Changes

From API change perspective, there are two changes which stand out. We have pruned all deprecated methods and all YANG 1.1 API hacks marked with ‘FIXME: 2.0.0’ have been cleared up. This results in better ergonomics for both API users and implementors.

yang-model-api has seen some incompatible changes, ranging from the renaming of AugmentationNode, TypedSchemaNode and ChoiceCaseNode to some targeted use of Optional instead of nullable returns. The most significant change here is the introduction of EffectiveStatement specializations — I will cover these in detail in a follow-up post, but they have enabled us to do the next significant item.

The YANG parser has been refactored into multiple components, and its internal structure has changed in order to hide most of the implementation classes and methods. It is now split into:

  • yang-parser-reactor (language-independent inference pipeline)
  • yang-parser-rfc7950 (hosting baseline RFC6020/RFC7950 parser)
  • yang-parser-impl (being the default-configured parser instance)
  • and a slew of parser extensions (RFC6536, RFC7952, RFC8040)

There is a yang-parser-spi artifact, too, which hosts common namespaces and utility classes, but its layout is far from stabilized. Overall, the parser has become a lot more efficient and better at detecting and reporting model issues. Implementing new semantic extensions has become a breeze.

YANG Codecs

YANG codecs have seen a major shift, with the old XML parser in yang-data-impl removed in favor of yang-data-codec-xml. yang-data-codec-gson gains the ability to parse and emit RFC7951 documents, which allows the RFC8040 RESTCONF module to come closer to full compliance. Since the SchemaContext is much more usable now, with Modules being indexed by their QNameModule, the codec operations have become significantly faster.
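For illustration, here is a hand-written RFC 7951-style document (my own example, not codec output): top-level members are qualified with their module name, and identityref values carry the prefix of the module that defines them:

```json
{
  "ietf-interfaces:interfaces": {
    "interface": [
      {
        "name": "eth0",
        "type": "iana-if-type:ethernetCsmacd",
        "enabled": true
      }
    ]
  }
}
```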

Overall, we are in a much better and cleaner shape. We are currently not looking at a 3.0.0 release anytime soon and can actually deliver incremental improvements to YANG Tools in a much more rapid cadence than previously possible with the entire OpenDaylight simultaneous release cycle being in the way.

We already have another round of changes ready for yangtools-2.0.2 and are looking forward to publishing them.

Robert Varga

Pantheon Technologies has recently joined the watchdog community Slovensko.Digital.

Since 2015, Slovensko.Digital has been advocating for better electronic services provided by the Slovak state. It monitors inefficient spending of public funds on digital projects and fights for improvements in solutions provided by the government. The association also provides consulting, in the form of know-how and analytical capacity, and aims to assist the government and public administration in reaching these goals.

We believe that high-quality public electronic services should be a standard.

With our skills and experience, we are confident that the Slovak public can get better digital services from the state, public administration and their agencies.

This is one of several important groups and memberships that Pantheon Technologies is part of.

Martin Firák

PMD-85 & Personal Computing in Czechoslovakia

At the end of November 2017, a very special talk took place at Banská Bystrica’s Matej Bel University. Within the broader “Extrapolations and the Scientific Colloquium” program, a lecture was given by the legend of Czechoslovak computing and father of the PMD-85, Roman Kišš. Why is he a legend, and why was it a must for me to see him talk, even though I only received the invitation three hours before the event started?

Roman Kišš is the inventor of the most successful Czechoslovak computer of the 1980s, the PMD-85. He also developed its Didaktik Alfa clone. If you attended an elementary or secondary school, or the youth Pioneer organization, in 1980s communist Czechoslovakia, you must have had a close encounter with a PMD.

An 8-bit computer built by Tesla Piešťany around the MHB 8080A processor, a clone of the Intel 8080, it was considered ahead of its time, with 48 KB of RAM and 4 KB of ROM. Despite being built from low-quality components, its performance was unmatched.

My first ever experience with a computer, in the 1980s, was with a PMD. Roman Kišš’s work, from a technological point of view, was on par with what Jobs and Wozniak had done in the US.

When I had the chance to go see Mr. Kišš’s lecture, I could not refuse. Nostalgia, curiosity, and the almost mystical aura surrounding his personality all drew me in.

The lecture was divided into two segments:

  • PMD-85 and how it came to life
  • Microsoft Azure

I was mainly curious about the PMD-85-focused segment.

During the first segment, Roman Kišš discussed how things worked in communist Czechoslovakia (or rather, how nothing worked). Stores had no supplies, nothing was in stock, and anything you were able to lay your hands on was either rubbish or stolen from somewhere.

There was a popular saying that if you stand out from the crowd, your head will be chopped off. Or, as a late-80s punk song recommended, everyone shall write with a blue pen. Look the same, behave the same, and do not deviate from the crowd. Unfortunately, many of these habits still persist, especially one that has become a part of our folklore: do only what you are told. This is also called the “zero fails given” approach.

Mr. Kišš talked a lot, but, unfortunately, not enough about the technicalities of the PMD. He discussed organizing his work, research and people, which was of great value to me. He talked for over an hour, and even though he swamped us with information, it was not even a tenth of what he wanted to say.

For me, the main takeaways were three messages that I kept thinking about for weeks afterwards.

01: You need to leave. You’ve outgrown us.

When Roman Kišš reached the stage where everybody in Czechoslovakia wanted a PMD, his team lead at Tesla Piešťany had a chat with him.

“Roman, we’ll need you to leave. You’ve outgrown us.”

To this, Roman‘s reply was brief,

“It’s your fault that you haven’t moved an inch!”

I could immediately imagine a young enthusiast who did not really fit the “zero fails given” environment. The main problem was that they could not afford to employ him, unless he became a department of his own, without them as his colleagues. Of course, you would not want to employ a colleague who turned everyone into his enemies by achieving, within several months, something that others had been struggling with for years without any results.

With the money Mr. Kišš had earned from patents and sales of the older PMI-80 computers, he was able to put together enough of his own resources to fund a team of enthusiasts who helped him with prototyping. What were his objectives? Motivating people with potential who were willing to work.

He built an exclusive club of co-workers, which a number of people wanted to join. He paid for team-building events in exclusive restaurants, keeping open tabs. Even though, looking back, the PMD-85 might look like the achievement of an individual, it was, in fact, the achievement of a team. The PMD-85 computer was a proof of concept which needed transforming into a product. Kišš knew this, and he did everything that could have been done.

He managed to build a team which was much better and stronger than the communist economic model, based on five-year plans, could imagine, even in its representatives’ wildest dreams full of shots fired at Saint Petersburg’s Winter Palace. He did everything he could so that the team could continue growing, taking training courses and improving their education. He had a clear target and kept focused on achieving it. A good team leader keeps his target in the cross-hairs.

02: You can’t be both a good father and a great professional

This sentence came together with an explanation: you can’t be perfect at both at the same time. You can’t be completely devoted to both your work and your family; one of them will always be sidelined. Mr. Kišš admitted that he didn’t spend enough time with his family, as he spent almost all of it at work. This made me think: what has changed compared to the 1980s?

Teamwork is one of the most important soft skills, yet you come out of school without ever having heard of it. We have better access to better information. We have the tools and procedures to learn better and faster. We’ve got everything we need, but is that enough? Most probably not.

Having the means but lacking motivation is worse than not having the means at all. We primarily need motivation to work hard; this was as true then as it is now. However, everything is a matter of scale: do I work hard because I want to improve myself and advance the team, or do I work hard because I always want to be the best?

In the first case, you are cooperation-oriented, leaving enough room for both being a good father and a great professional. However, the second case is strongly competitive and leaves room for nothing else; the drive to be the best always needs someone to compete with.

And now for the philosophical question: is it better to be a strong member of a strong team which would also be able to thrive without a specific individual, or be the dominant member having a fully dependent team, which, if losing the dominant member, ceases to exist? I’d go for being a strong member of a strong team.

What about you, dear reader?

03: Money should never be your goal, only the means for reaching one

As I already mentioned, Roman Kišš spent a lot of his own resources on materializing his ideas. He spent it on people, literature, electronic components, and whatever he currently needed. Making money has never been his goal. As he mentioned, he received only 4 Kčs for each of the 125,000 PMD-85s sold.

He also earned a little designing the Didaktik Alfa computer for Didaktik Skalica. He invested all the funds into moving his projects forward, and to live on during his emigration period in Canada. This was after he had realized there was no room for his further growth in Czechoslovakia. Also, no one wanted to employ him any more, but that’s a different story.

After relocating to Canada, he had to start from scratch. He did a semi-legal PhD: he had done everything other PhD students at the university were doing, but without receiving a salary. What was his reward? The professor who led his research arranged for Kišš to attend all the lectures and take all the exams. Almost a normal university study, just without receiving a diploma at the end.

His motivation was purely about acquiring knowledge, so he did not hesitate and accepted: once you’ve acquired certain skills, no one is interested in what you’ve studied, only in what you know. Your knowledge is the only thing you truly own. Roman Kišš’s knowledge and skills have helped him reach a net worth far beyond the 500,000 Czechoslovak crowns he spent on his diploma-less studies.

Here I have to ask myself: what’s the sum of all my knowledge when Google has an outage? It may be an over-used phrase, yet I truly believe that this gentleman is a living example that everyone should do what they consider meaningful, not what makes them a fortune. Do your best and money will come.

…and back to PMD-85

The PMD-85 computer is a piece of technology holding a very special place in my life. It’s primarily a personal nostalgia, as it was the first computer I got as a third grader. My father built it from components that he honorably stole, which was the standard way of acquiring most possessions in a socialist economy.

I started learning BASIC first, later switching to Pascal at the Banská Bystrica Pioneer organization, which was, by the way, located in the same building where Pantheon Technologies has its Banská Bystrica office now. Later on, a second piece was added to my private collection.

I took them both to Roman Kišš’s lecture to meet their creator. I got them both signed. And I thanked him for PMD-85 being responsible for my career, for doing stuff that I truly like, for living.

Mr. Kišš seemed to be happy, and so am I. Thanks to Mr. Kišš, PMD-85 and my father.

Martin Bobák
Technical Leader

Pantheon Technologies @ KubeCon & CloudNativeCon 2017

At the beginning of December 2017, we attended the KubeCon & CloudNativeCon 2017 conference in Austin, Texas. The conference, organized by the Linux Foundation, brought together leading contributors in cloud native applications and computing, containers, micro-services, central orchestration processing and related projects.

KubeCon 2017, Austin

More than four thousand developers, together with other people interested in cloud-native technologies, visited the event in Austin. The growing number of attendees is a testimony to the rising importance of Kubernetes and containerized applications for companies of all sizes.

The schedule was full of talks about various CNCF technologies, such as Kubernetes, Prometheus, Docker, Envoy, CNI and many others. “Kubernetes is the new Linux,” pointed out Google’s Kelsey Hightower in his keynote, predicting a bright future for these technologies.


In addition to the talks, sponsors at KubeCon showcased their projects in a huge exhibition hall. Our booth presented a project our friends from Cisco contributed to: a VPP-centric network plugin for Kubernetes, which aims to provide the fastest connectivity for containers by bypassing the kernel network stack. During the presentation of the project, we were involved in many conversations with attendees from various companies, which shows their interest in the solution.


Rastislav Szabo, Lukas Macko @ IETF 100 Hackathon in Singapore

The IETF 100 Hackathon wrapped up several weeks ago in steamy Singapore. Over two hundred participants spent the weekend of November 11th – 12th discussing, collaborating and developing sample code, solutions and ideas that show practical implementations of IETF standards. The theme was IPv4-IPv6 Transition Technology Interop. We, at Pantheon Technologies, had to be part of it.
IETF 100 Hackathon, Singapore

Our idea and testing

If you have never seen this YouTube video on IPv6, you really should.

It features two characters, one of whom is an IPv6 proponent while the other really admires NATs: and that was our team. We wanted to test whether the “new” Internet would run on IPv6 plus NAT64, or whether we can keep the “old” Internet working forever through IPv4 address sharing mechanisms.

The room started to fill quickly after the doors opened. We displayed a poster introducing the project and, after a brief kick-off presentation, got to work. Our table, full of power outlets, switches, gateways, routers and patch cables, attracted the most interest among the hackathon participants.

Scheme describing our challenge.

Transition technology interop.

Testing and findings

The hackathon was the first opportunity for interop testing of the VPP DS-Lite AFTR, as well as NAT64 and lw4o6. We also spent the weekend implementing the VPP DHCPv6 PD client and, for the STUN library, DNS64-based NAT64 prefix discovery and an IPv4-literal synthesizer. We also tried testing applications behind DS-Lite, 464XLAT and NAT64.

We made a few interesting findings. On the iPhone, whose ecosystem is forcing IPv6-only support, almost everything works. On the laptop, most stuff works. We learned that building these networks is very hard! I mean, we thought IPv6 should just be plug and play. IPv6 addresses are long to type, and synthesizing IPv6 addresses from NAT64 prefixes was a poor idea, but at least we fixed a buffer overflow bug. Media still works point-to-point, even behind multiple NATs.

Views from Singapore windows.

Results & future of IPv6

We think the future really should be IPv6 plus NAT64, but this puts new requirements on IPv6 hosts: they need to be able to do NAT64 prefix discovery, synthesize IPv6 addresses from IPv4 literals, and support local DNS64.
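The synthesis step can be sketched as follows (my own illustration, hard-coding the /96 well-known prefix from RFC 6052; a real host would first discover the network's NAT64 prefix, e.g. per RFC 7050):

```python
import ipaddress

# RFC 6052 well-known NAT64 prefix; deployments may use a
# network-specific prefix discovered via DNS64 instead.
WELL_KNOWN_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_ipv6(ipv4_literal, prefix=WELL_KNOWN_PREFIX):
    """Embed an IPv4 address in the low 32 bits of a /96 NAT64 prefix."""
    if prefix.prefixlen != 96:
        raise ValueError("this sketch only handles /96 prefixes")
    v4 = ipaddress.IPv4Address(ipv4_literal)
    return ipaddress.IPv6Address(int(prefix.network_address) | int(v4))

print(synthesize_ipv6("192.0.2.33"))  # 64:ff9b::c000:221
```

RFC 6052 also defines longer embeddings (/32 through /64), which shift the IPv4 bits around a reserved octet; this sketch covers only the common /96 case.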

Our work continued on Sunday until 2 pm, when we stopped whatever we were doing and the sharing of results began: presentations no longer than 3 minutes each, recapping results, lessons learned and recommendations. The video from the presentations and awards is available on YouTube.

Our team at the hackathon.

IPv6-IPv4 transition technology interop presentation is available here and NAT64 testing here.

Our team won the “Best Input for the Scotch BoF to the universal deployment of IPv6” award.

Matúš Fabian

Pantheon Technologies @ BIS 2017 Conference in Moscow

At the end of October 2017, I had a chance to visit one of the world’s largest cities – beautiful Moscow, capital of Russia, where the BIS 2017 event took place. BIS – Building Infrastructure Systems – focused on data centers, networks and technologies connected to these topics.

The venue of choice was the Azimut Hotel: a fully smoke-free zone, with lots of photos on the walls depicting healthy ways of life.

Moscow business district under construction


BIS 2017 was very well organized and its timing precise. Everything was on time and easy to find. It was attended by nearly 1,000 delegates, among them many representatives of businesses and government bodies, highly skilled technical specialists, and CxOs managing large companies.

From the very beginning, I literally had no time to sit down, such was the number of visitors to our booth. Most of them showed great interest in our company’s scope of work, the level of expertise we provide, and the projects we have participated in. Not only that: there were hundreds of other questions they wanted to ask.

BIS 2017 Moscow servers

Presentation day

At 11:20 on the event day, we had a presentation slot allocated for Pantheon Technologies. People were showing great interest in SDN, NFV and IoT technologies. I had 15 minutes to discuss the latest trends in SDN and NFV and to introduce our company to the audience.

Unfortunately, there was almost no time left for the Q&A part, so I invited everyone to our booth. And people came right after the presentation! Until the very end of the day, people kept coming, asking questions and requesting references and contacts. That was truly amazing!

BIS 2017, Moscow, Pantheon Technologies brochures


I spoke to people from the Government of Moscow, from financial bodies, and from telecom and development companies. There were several representatives of the largest Russian system integration companies, who were interested in cooperation.

At the same time, it was inspiring to listen to their practical “field” experience and their understanding of the market. My overall impression is that SDN/NFV technologies are currently being actively researched and tested in Russia. However, significant ROI is still a rare case there; more work and time are needed until that point is reached.

BIS 2017, Moscow, robot

My final impression was that we came to Russia at just the right time. There are many interesting projects out there where our long-term expertise in the field of networking software development may prove useful.

Denis Rasulev