PANTHEON.tech visited Shanghai last week to attend the third annual Huawei Connect. Martin Varga shares some insights from the event.
Activate Intelligence
This year’s theme was Activate Intelligence. Huawei outlined its broad strategy to bring artificial intelligence (AI) to the masses in applications in manufacturing, autonomous driving, smart cities, IoT and other areas. AI will enable billions of new devices to be connected and transfer big data over the network.
The conference was held at the Shanghai World Expo Exhibition Center. Huawei put a lot of resources and effort into organizing the event, which drew more than 26,000 attendees. The conference was organized perfectly, down to the last detail (exhibition areas, keynote and conference areas, chill-out zones, etc.).
We witnessed demonstrations of various smart technologies, ranging from smart city applications to smart education and smart transportation. Smart everything.
One of the most impressive technology demonstrations was an AI that could translate between Chinese and English as well as a human translator. Microsoft, in cooperation with Huawei, states:
“Internal tests have shown, depending on the language, up to a 23 percent better offline translation quality over competing best-in-class offline packs.”
Huawei is also building an AI ecosystem of partners, backed by US$140 million, which it aims to grow to more than one million developers over the next three years.
Networking
We had some interesting meetings with Huawei’s representatives. It was a pleasure to learn about Huawei’s visions for the near future, and we are glad to share the same vision of an exciting future. Huawei invests heavily in researching new technologies such as AI and IoT in order to define practical use cases that can be deployed into its product portfolio.
PANTHEON.tech, as a software development company, is strongly focused on computer networking, which relates to Huawei’s vision of integrating AI into managing network operations.
Mr. Yang Jin, Director of Network Data Analytics Research at Huawei Technologies Co., stated:
“Artificial Intelligence and Machine Learning will abstract data to make next-generation communication breakthroughs come to life.”
Feel free to contact PANTHEON.tech if you are interested in AI, AR/VR, IoT, Intent-Driven Networking, SDN, NFV, Big Data, or related areas. We can talk about your challenges and how we can solve them together.
lighty.io is an SDK that provides components for the development of SDN controllers and applications based on well-established standards in the networking industry. It takes advantage of PANTHEON.tech’s extensive experience from the involvement in the OpenDaylight platform and simplifies and speeds up the development, integration, and delivery of SDN solutions.
lighty.io also enables SDN programmers to use ODL services in a plain Java SE environment, and it allows a major OpenDaylight distribution vendor to build and deploy their applications faster.
FRINX UniConfig
FRINX UniConfig provides a common network API across physical and virtual devices from different vendors. It leverages an open source device library, which offers connectivity to a multitude of networking devices and VNFs.
The API provides the ability to store intent and operational data from services and devices. It allows committing intent to the network, syncs from the network so that the latest device state is reflected in the controller, compares intended state with operational state, and provides device-wide and network-wide transactions. All changes are applied so that only the parts of the configuration that have changed are updated on the devices.
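As an illustration of this commit behavior, the sketch below models the idea of pushing only changed configuration subtrees to a device. This is a simplified, hypothetical model for illustration only, not UniConfig’s actual code:

```python
def config_diff(intended: dict, operational: dict) -> dict:
    """Return only the subtrees of `intended` that differ from `operational`.

    Illustrates UniConfig's commit principle: only changed parts of the
    configuration are applied to the device.
    """
    diff = {}
    for key, intended_value in intended.items():
        operational_value = operational.get(key)
        if isinstance(intended_value, dict) and isinstance(operational_value, dict):
            nested = config_diff(intended_value, operational_value)
            if nested:  # keep the subtree only if something inside changed
                diff[key] = nested
        elif intended_value != operational_value:
            diff[key] = intended_value
    return diff

# Hypothetical device state and intent:
operational = {"interfaces": {"eth0": {"mtu": 1500, "enabled": True},
                              "eth1": {"mtu": 1500, "enabled": False}}}
intended = {"interfaces": {"eth0": {"mtu": 9000, "enabled": True},
                           "eth1": {"mtu": 1500, "enabled": False}}}

# Only eth0's changed MTU would be sent to the device:
print(config_diff(intended, operational))
```

In a real controller the diff is computed over modeled (YANG) data rather than plain dictionaries, but the principle is the same.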
The UniConfig framework consists of distinct layers, each providing a higher level of abstraction. The APIs of the lowest layer provide the ability to send and receive unstructured data to and from devices. The unified layer provides translation capabilities to and from OpenConfig. The UniConfig layer provides access to the intent and the actual state of each device, plus the capability to perform transactions and roll back configurations.
NETCONF devices can be configured via their native YANG models or via OpenConfig. Finally, FRINX UniConfig also provides service modules based on IETF YANG models for the configuration of L2VPNs, L3VPNs and enables the collection of LLDP topology information in heterogeneous networks.
The UniConfig Framework is based on open source projects like OpenDaylight and Honeycomb. It publishes all translation units under the Apache v2 license. Customers and integration partners can freely contribute, modify and create additional device models, which work with the UniConfig Framework.
How did PANTHEON’s lighty.io help?
PANTHEON.tech’s lighty.io helped to make UniConfig run and build faster.
Porting UniConfig to lighty.io required no changes to the application code and has brought many measurable improvements. UniConfig now starts faster, has a smaller memory footprint, and most importantly – significantly reduces build time.
FRINX offers solutions and services for open source network control and automation. The team is made up of passionate developers and industry professionals who want to change the way networking software is created, deployed, and operated. FRINX offers network automation products and distributions of OpenDaylight and FD.io in conjunction with support services. They are proud to count service providers and enterprise companies from the Fortune Global 500 list among their customers.
About PANTHEON.tech
PANTHEON.tech is a software research & development company focused on network technologies and prototype software. Yet, we do not perceive networks as endless cables behind switches and routers. For us, it is all software-defined. Clean and neat. Able to dynamically expand and adapt according to the customer’s needs.
We thrive in the world of network functions virtualization and the rising need for orchestration, focusing on SDN, NFV, Automotive, and Smart Cities. We are experts in OpenDaylight, FD.io VPP, PNDA, Sysrepo, Honeycomb, Ligato, and much more.
Complete automation and full forwarding plane programmability
Private data centers are a hot topic for companies and enterprises that are not willing to push all the data into public clouds. Kaloom Software Defined Fabric™ (Kaloom SDF) is the world’s first fully programmable, automated, software-based data center fabric capable of running VNFs efficiently at scale. This is the first data center networking fabric on the market that provides complete automation and full forwarding plane programmability.
Kaloom approached PANTHEON.tech last year, knowing of PANTHEON.tech’s long and intensive involvement in SDN, particularly in the OpenDaylight project. OpenDaylight (ODL) is a modular open platform for orchestrating and automating networks of any size and scale. The OpenDaylight platform arose out of the SDN movement, in which PANTHEON.tech has expertise and experience. Hence, it was a logical step to utilize this expertise in this project and leverage what had already been done.
A traditional ODL-based controller design was not suitable for this job because of the bulkiness of Karaf-based deployments. Kaloom requested a modern web UI, which the vanilla ODL platform does not provide. lighty.io, as a component library, makes it possible to run ODL services such as MD-SAL, NETCONF, and YANG Tools in any modern web server stack, and to integrate with other components like MongoDB.
Architecture
The following architecture is becoming a blueprint for SDN applications today. We utilize the best of both worlds:
MD-SAL, NETCONF and YANG Tools from ODL
Updated modern web stack Jetty/Jersey and
MongoDB as a persistent data store.
This is how the Kaloom Fabric Manager (KFM) project started. After several months of development and customization, we deployed a tailored web application that provides a management UI for Kaloom SDF. We changed and tailored our Visibility Package application to suit Kaloom’s requirements and specifics; this specialized version is named KFM. The architecture diagram above shows the internals of KFM and how we interconnect with Kaloom’s proprietary Fabric Manager/Virtual Fabric Manager controller devices.
The solution for physical data centers
The lighty.io-based back end of KFM, with its NETCONF plugin, provides REST services to the Angular UI, which uses our Network Topology Visualization Component for a better topology view and user experience. Using these REST endpoints, it is easy to send a specific NETCONF RPC to the Kaloom SDF controllers.
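To illustrate, a back end like this might translate a REST request into a NETCONF RPC payload along these lines. This is a hedged sketch: the `get-fabric-state` operation and its `fabric-id` parameter are invented for illustration and are not Kaloom’s actual API; only the NETCONF base namespace and the `message-id` attribute come from the NETCONF standard (RFC 6241):

```python
import xml.etree.ElementTree as ET

NETCONF_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_netconf_rpc(message_id: str, operation: str, params: dict) -> str:
    """Wrap an operation and its parameters in a NETCONF <rpc> envelope."""
    rpc = ET.Element(f"{{{NETCONF_NS}}}rpc", {"message-id": message_id})
    op = ET.SubElement(rpc, operation)
    for name, value in params.items():
        child = ET.SubElement(op, name)
        child.text = str(value)
    return ET.tostring(rpc, encoding="unicode")

# Hypothetical request a REST endpoint might produce:
payload = build_netconf_rpc("101", "get-fabric-state", {"fabric-id": "pod-1"})
print(payload)
```

In the real system, lighty.io’s NETCONF southbound plugin builds and sends such RPCs from modeled data; the sketch only shows the shape of the message.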
While working on this next-gen Data Center Infrastructure Management software, we realized that integrating all the moving parts of the system is a crucial step for final delivery. Since different teams were working on different parts, it was essential that we could isolate the lighty.io part of the system and adapt it to Kaloom SDF as much as possible. We used the field-tested NETCONF device simulator from our lighty.io package to deliver thoroughly tested software that ensures the stability of the KFM UI.
Kaloom SDF provides a solution for physical data centers administrated by Data Center Infrastructure Provider (DCIP) users. A physical data center can easily be sliced into virtual data centers offered to customers, called virtual Data Center Operator (vDCO) users. The DCIP user can monitor and configure the physical fabrics – the PODs of the data center. The KFM web UI shows the fabrics in a topology view and allows updating the attributes of fabrics and fabric nodes.
Topology View of Fabric Manager
Advantages for DCIP
The main task of DCIP users is to slice the fabrics into virtual data centers and virtual fabrics. This process involves choosing servers through associated termination points and associating them with a newly created virtual fabric manager controller. Server resources are then used through the virtual fabric manager by vDCO users.
vDCO users can use the server resources and connect them via the network management of their virtual data center. A vDCO can attach server ports to switches with the proper encapsulation settings. Once a switch is ready, the vDCO can create a router and attach switches to it. The router offers different configuration possibilities to meet the vDCO user’s needs: L3 interface configuration, static routing, BGP routing, VXLANs, and many more. KFM also offers a topology view of the virtual data center network, so you can check the relations between servers, switches, and routers.
Topology View of Fabric Manager
For more details about the KFM UI in action, please see the demo video with the NETCONF simulator of Kaloom SDF below, or visit Kaloom or the Kaloom Academy.
In April 2018, the xRAN Forum released the Open Fronthaul Interface Specification, the first specification it has made publicly available since its launch in October 2016. The specification allows a wide range of vendors to develop innovative, best-of-breed remote radio units/heads (RRUs/RRHs) for a wide range of deployment scenarios, which can be easily integrated with virtualized infrastructure and management systems using standardized data models.
This is where PANTHEON.tech entered the scene. We became one of the first companies to introduce a full-stack 5G solution compliant with this specification.
With just a few days of coding, utilizing the readily available lighty.io components, we created a Radio Unit (RU) simulator and an SDN controller to manage a group of Radio Units.
Now, let us inspect the architecture and elaborate on some important details.
We used lighty.io, specifically its generic NETCONF simulator, to set up an xRAN Radio Unit (RU) simulator. xRAN specifies YANG models for 5G Radio Units. The lighty.io NETCONF device library is used as a base, which made it easy to add custom behavior; the 5G RU is then ready to stream data to a 5G controller.
The controller pushes the data collected from RUs into Elasticsearch for further analysis. The RU device emits notifications from simulated Antenna Line Devices (ALDs) connected to the RU, containing:
Measured Rx and Tx input power in mW
Tx bias current in mA (internally measured)
Transceiver supply voltage in mV (internally measured)
Optional laser temperature in degrees Celsius (internally measured)
*We used the xran-performance-management device model for this purpose.
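A minimal sketch of what one simulated measurement might look like, using the units from the list above. The field names here are our own invention for illustration; the authoritative structure and naming are defined by the xRAN YANG model:

```python
import random

def ald_measurement(include_laser_temp: bool = False) -> dict:
    """Generate one simulated Antenna Line Device measurement sample.

    Value ranges are arbitrary but plausible; a real RU reports measured
    hardware values, not random ones.
    """
    measurement = {
        "rx-input-power-mw": round(random.uniform(0.1, 2.0), 3),
        "tx-input-power-mw": round(random.uniform(0.1, 2.0), 3),
        "tx-bias-current-ma": round(random.uniform(20.0, 60.0), 1),
        "supply-voltage-mv": round(random.uniform(3200.0, 3400.0), 1),
    }
    if include_laser_temp:  # laser temperature is optional in the model
        measurement["laser-temperature-celsius"] = round(random.uniform(25.0, 55.0), 1)
    return measurement

print(ald_measurement(include_laser_temp=True))
```

In the simulator, a sample like this would be wrapped in a NETCONF notification and streamed to the controller.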
With lighty.io, we created an OpenDaylight-based SDN controller that can connect to RU simulators using NETCONF. Once an RU device is connected, telemetry data is pushed via NETCONF notifications to the controller, and then directly into Elasticsearch.
Usually, Logstash is required to upload data into Elasticsearch. In this case, it is the 5G controller that pushes device data directly to Elasticsearch using time-series indexing.
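The direct-push approach can be sketched as follows. The daily index-naming scheme and document fields are assumptions for illustration, not the controller’s actual implementation; the NDJSON shape matches Elasticsearch’s bulk API:

```python
import json
from datetime import datetime, timezone

def bulk_lines(device_id: str, samples: list) -> str:
    """Build an NDJSON body for Elasticsearch's bulk API.

    Each sample becomes an action line plus a document line, indexed into a
    per-day index (a simple time-series indexing scheme).
    """
    lines = []
    for sample in samples:
        ts = datetime.now(timezone.utc)
        index = f"ru-telemetry-{ts:%Y.%m.%d}"  # daily time-series index
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps({"device": device_id,
                                 "@timestamp": ts.isoformat(),
                                 **sample}))
    return "\n".join(lines) + "\n"  # NDJSON body for POST /_bulk

body = bulk_lines("ru-1", [{"tx-bias-current-ma": 42.0}])
print(body)
```

With one index per day, queries over a time window only touch the relevant indices, which keeps dashboards responsive as telemetry accumulates.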
When a Radio Unit device connects, the monitoring process starts automatically. The ald-communication RPC is called on the RU device, collecting statistics on:
The number of frames with incorrect CRC (FCS) received from ALD – running counter
The number of frames without stop flag received from ALD – running counter
The number of octets received from HDLC bus – running counter
*We used the xran-ald.yang model for this purpose.
The lighty.io 5G controller is also listening to notifications from the RU device mentioned above.
Elasticsearch and Kibana
Data collected by the lighty.io 5G controller via RPC calls and notifications is pushed directly into Elasticsearch indices. Once indexed, Elasticsearch provides a wide variety of queries on the stored data.
For example, we can display the number of faulty frames received from Antenna Line Devices over time, or analyze operational parameters of Radio Unit devices, such as receive and transmit input power.
Such data is precious for Radio Unit setup, making a control-plane feedback loop possible.
By adding Elasticsearch into the loop, data analytics and the feedback loop become ready to perform complex tasks, such as computing faulty-frame statistics from the Antenna Line Devices or driving the Radio Unit’s operational setup.
The benefit of this solution is a full-stack xRAN test. YANG models and their specifications are obviously not enough, considering the size of the project. With lighty.io 5G xRAN, we invite Radio Unit device vendors and 5G network providers to cooperate and build upon this solution. Having the Radio Unit simulators available and ready allows for a quick development cycle without being blocked by RU vendors’ bugs.
lighty.io has been used as a 5G rapid application development platform that enables a quick xRAN Radio Unit monitoring system setup.
We can easily certify xRAN Radio Units against the lighty.io 5G controller and provide RU simulations for the management plane.
Visit the lighty.io page and check out our GitHub for more details.
The advantages of deploying lighty.io in Data Center Infrastructure Management (DCIM)
The DCIM market is continuing to evolve and large enterprises continue to be the primary adopters of new DCIM software solutions. The goal of a DCIM software initiative is to provide administrators the ability to identify, locate, visualize, and manage all physical data center assets with a holistic view.
PANTHEON.tech has developed lighty.io, based on OpenDaylight, in Java SE. It is great software for implementing customized DCIM solutions such as an SDN controller, an NFV orchestrator, or VNF management.
Some of the features you will benefit from while managing your data center are listed below.
Model-driven approach
lighty.io implements a model-driven approach to data center infrastructure management. Because common models are used, all parts of a lighty.io-based system can exchange configuration, operational, monitoring, and telemetry data.
These models define the structure, syntax, and semantics of the data processed by each part of the system. Using standardized models (e.g., models from OpenConfig or the IETF) enables seamless migration from one vendor to another.
Scalability and controller hierarchy
Horizontal scalability – lighty.io supports clustering, a feature that allows horizontal scaling of the system by adding more controller instances (nodes) to a cluster.
Controller hierarchy – the NB plugins of lighty.io allow the implementation of upper layer applications running as microservices and performing operations via the controller’s NB plugin API. It is also possible to design a hierarchy of controllers, where the upper layer controller(s) perform operations using the lower layer controllers’ NB plugins. One of the implemented NB plugins implements the NETCONF protocol; using it in a hierarchy of controllers makes it possible to manage the lower layer controllers as NETCONF devices.
Security
lighty.io is implemented in Java, which is a type-safe programming language by nature. Type safety leads to more secure software than, for example, software written in C/C++, while still achieving good performance. The model-driven approach and source code generation also support software security.
These features minimize the possibility of errors in the code by requiring verification of the input data from external applications and connected devices. Encryption, authorization, and the use of certificates are a matter of course.
Legacy and heterogeneous systems support
lighty.io implements the main SDN standards, e.g., NETCONF, RESTCONF, and YANG. Moreover, lighty.io also covers legacy technologies, such as an SNMP southbound plugin. lighty.io can therefore be used not only in green-field deployments (implementing a system from scratch) but also in brown-field deployments, where a heterogeneous set of networking devices needs to be managed.
Extensibility
As a software design principle, the model-driven approach, combined with the architecture of lighty.io, speeds up and simplifies the implementation of extensions, resulting in great extensibility. The lighty.io architecture defines Northbound (NB) and Southbound (SB) plugin implementations as model-driven modules.
NB & SB Plugins
NB plugins enable communication between the controller and upper layer applications, such as dashboards, upper layer controllers, and inter-DC orchestrators. The upper layer applications can be implemented as external services or as native modules of the controller.
The upper layer applications mostly implement application logic, business logic, administration interfaces, data analytics, data transformation etc. NB plugins can be used to:
submit commands to the SDN controller,
send notifications from the controller to upper layers,
send telemetry data from the controller to upper layers,
monitor the controller from upper layers,
read the operational data of the controller and the devices it orchestrates,
configure the controller itself or a specific device orchestrated by the controller.
SB plugins implement protocols and technologies that extend the SDN controller’s capabilities with new standards and technologies, allowing new network devices to be connected. SB plugins can be used for:
the configuration of networking devices,
fetching operational (state) data of the networking devices,
receiving telemetry data,
monitoring of devices,
submitting commands to the devices,
receiving notifications from devices.
Models and the model-driven approach simplify the implementation of new plugins and upper layer applications: the models allow generating source code for classes (an OOP construct) and related code that verifies the syntax and semantics of the data, minimizing the probability of errors introduced by human interaction.
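The idea can be sketched in a few lines. The model format below is invented purely for illustration (lighty.io generates such code from YANG models, which this is not), but it shows how a declarative model drives uniform syntax and semantics checks instead of hand-written validation scattered through the code:

```python
# A toy "model" describing one node type: field names, types, and whether
# each field is mandatory. In lighty.io this role is played by YANG models.
MODEL = {
    "interface": {
        "name": {"type": str, "mandatory": True},
        "mtu": {"type": int, "mandatory": False},
    }
}

def validate(node: str, data: dict, model: dict = MODEL) -> list:
    """Return a list of violations; an empty list means the data conforms."""
    errors = []
    schema = model[node]
    for field, rules in schema.items():
        if field not in data:
            if rules["mandatory"]:
                errors.append(f"missing mandatory field '{field}'")
        elif not isinstance(data[field], rules["type"]):
            errors.append(f"field '{field}' has wrong type")
    for field in data:
        if field not in schema:
            errors.append(f"unknown field '{field}'")
    return errors

print(validate("interface", {"name": "eth0", "mtu": 1500}))
print(validate("interface", {"mtu": "big"}))
```

Because every plugin validates data against the same model, a malformed request is rejected uniformly at the boundary rather than relying on each developer remembering to check it.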
If you would like to know more about lighty.io and how it could improve your business, visit lighty.io or our Product Page.
PANTHEON.tech has developed a network topology visualization component. Its main purpose is to enable responsive and scalable front-end network topology visualization applications on top of lighty.io. The topology visualization component lets you visualize any topology on any device with a web browser. It will also be included in the lighty.io distribution package.
Existing commercial applications fail to cover network topology visualization sufficiently, so we, as a successful software development company, were compelled to create our own solution based on the technologies we know and like to use.
The experience of developing the Visibility Package – a software component used to gather and visualize network topology data from different networks, network management systems, and cloud orchestrators – led PANTHEON.tech developers to create a better solution. Using the network topology visualization component will significantly reduce your development time.
We have developed the topology visualization component as an Angular component that can be used in Angular applications to create network visualization applications. Thanks to its modularity and customizability, the component can visualize any network, from small company networks to large-scale data centers with thousands of nodes and links.
Picture(1): A screenshot of a spine-leaf network visualization sample.
Because every use case’s demands, requirements, and scale differ widely, a scalable and universal component was needed. That is why we based the topology visualization component on the Angular framework, which allows rapid development of responsive, modular, and scalable applications.
Our previous experience showed us that SVG does not perform well with very large network topologies. That is why we decided to use HTML5 Canvas instead. Canvas provides seamless animations and great responsiveness, even with thousands of nodes and links.
Some of the great features of the topology visualization component are:
Ease of use
The topology visualization component includes extensive documentation and examples to help developers during application creation. With the Angular CLI, a basic application can be set up in minutes.
Customizability
The basic application can easily be customized to the desired state. We developed the topology visualization component with customization in mind.
Modularity
The topology visualization component is developed as separate modules. Developers can decide which modules are needed for a particular project and add other modules whenever they are required.
Speed & Responsiveness
Angular and HTML5 Canvas are used to ensure the application runs smoothly, even with large amounts of data.
Scalability
The topology visualization component works with small network topologies of a few nodes and links but truly shines with large-scale topologies. We are continually adding new features based on our clients’ requests and needs. Watch this space for many exciting features to be announced in the near future.
lighty.io is a Software Development Kit (SDK) that provides components for the development of Software Defined Networking (SDN) controllers based on commonly used standards in the networking industry. We have used our experience from the OpenDaylight (ODL) platform to create lighty.io, which empowers you to simply develop, integrate, and deploy a tailored SDN controller.
An SDN controller plays an essential role as an orchestrator of networking infrastructure in 5G networks. It is used not only for the configuring and monitoring of the physical routers and switches, but also for managing virtual networks of Virtual Machines (VMs) and containers. Among many great benefits of an SDN controller (or set of interconnected SDN controllers) is that it has a holistic view of the network. An SDN controller is also used for connecting User Equipment (UE) or Customer Premise Equipment (CPE) to data centers and enables technologies such as network slicing and edge computing to be used in the 5G.
Network slicing requires the ability to configure and monitor all networking devices (physical or virtual) along the traffic path. For edge computing purposes, it is necessary to automate device configuration in order to support 5G scenarios such as UE registration.
Figure 1: Overview of a 5G network architecture
Figure 1 (above) shows how an SDN controller based on lighty.io uses southbound plugins to read and write the configuration and state of networking devices in the WAN and in the physical or virtual networks of both core and edge data centers.
lighty.io supports many southbound protocols for network orchestration, such as the NETCONF and RESTCONF protocol plugins. The number of vendors and devices supporting these protocols grows every year, and we believe that many devices and appliances in the Radio, Edge, and WAN domains will speak these protocols in the 5G era. lighty.io also contains PANTHEON.tech’s SNMP SB plugin for integration with legacy systems and for heterogeneous environments where the old and the new mix.
The modular architecture of lighty.io allows adding new plugin implementations for other protocols. lighty.io exposes the configurational and operational data of all the devices to an upper layer, where the business logic of administration and automation applications can be implemented. The APIs can also be accessed remotely via the REST API, and other communication methods can be implemented as northbound plugins. These upper layer applications can be designed as microservices or as part of the SDN controller.
Figure 2: An example of a 5G network using FD.io data plane
As mentioned above, an SDN controller is also needed for the orchestration of virtualized networks in data centers. The open source project FD.io is one particular example of such technology. FD.io implements a configurable data plane running in user space, not in kernel space. Thanks to this, the FD.io data plane can be deployed as an ordinary microservice, e.g., as a container. FD.io can be used to interconnect containers or VMs in data centers, and all FD.io instances can be orchestrated by a lighty.io-based SDN controller.
Figure 3: An example of a 5G network and integration with other IoT networks
Besides connecting mobile phones and tablets to the network, 5G will also enable a vast number of Internet of Things (IoT) devices to be connected to the internet and to communicate directly with each other. IoT solutions can leverage SDN controllers for similar purposes as other 5G technologies do. Specific VNFs for IoT can be deployed and orchestrated by an SDN controller, whether at the edge or in the core data centers. Network slicing could be used for smart car and smart city solutions, as shown in Figure 3 (above).
In this way, 5G networks will enable the adoption of IoT in everyday life. The number of IoT devices expected to connect to the internet in the upcoming years is substantial. According to Gartner’s predictions, IoT technology will be in 95 percent of electronics by 2020 [1]. According to another forecast, from Cisco, 50 billion devices will connect to the internet by 2020 [2].
Here is a brief summary of features and benefits provided by lighty.io:
The modular architecture of southbound plugins allows implementation of communication with physical and virtualized networking devices.
Configurational and operational data of all orchestrated devices is exposed via northbound plugins for administration, automation, and analytics purposes.
MD-SAL (Model-Driven Software Abstraction Layer) provides the data store and services used by other parts of the SDN controller, such as southbound and northbound plugins. The data processed by MD-SAL is modeled in the YANG modeling language.
NETCONF and RESTCONF southbound plugins are available and field-tested.
SNMP plugin for integration with legacy systems is also available.
NETCONF protocol can be used by lighty.io for orchestration of FD.io data plane to interconnect VMs or cloud-native applications in data centers.
lighty.io has a lightweight hardware footprint and hence responds promptly.
PANTHEON.tech had a unique opportunity to participate at the Open Networking Summit (ONS) 2018. The central topic of ONS 2018 was data center solutions: ONAP and Kubernetes-based systems. Several new projects under the wing of the Linux Foundation were also introduced, for example the Akraino Edge Stack and DANOS (the Disaggregated Network Operating System project), an operating system for white-box switches.
PANTHEON.tech has traditionally participated in OpenDaylight (ODL) as well as FD.io development, and we launched our lighty.io product at ONS. lighty.io changes the conventional OpenDaylight approach to building SDN controller applications, making them smaller, nimbler, and microservice-ready.
lighty.io caught the attention of OpenDaylight community members, as well as customers struggling with real-life OpenDaylight deployments. This solution helps consume and deploy OpenDaylight services faster, with a lower cost of ownership. Faster builds, quick test runs, and smaller distribution sizes are the right way to proceed. lighty.io also brings added value to the ONAP ecosystem by providing a runtime for ONAP’s SDN-C. We are continuously updating the community with lighty.io use-case examples and lighty.io video use cases.
One of the projects whose community we participate in is The Fast Data Project (FD.io). For the FD.io community, we presented Ligato, Honeycomb’s younger brother: an easy-to-learn and easy-to-use integration platform. We love to see that the FD.io community is growing, not only in the number of contributors but also in the number of projects and use cases.
We were also pleased to accept an invitation to the introduction of a new FD.io project, “Dual Modes, Multi-Protocols, Multi-Instances” (DMM), where we discussed use cases and integration paths from the current networking stack. The FD.io community has the potential for further growth, especially as we see the networking industry shift from closed-source, hardware-based network functions to open-source, software-based solutions.
ONS 2018 was an exciting opportunity for us. It was a forum where we could easily share our knowledge and provide a much-needed innovation. Let’s see how artificial intelligence and machine learning will change the landscape of networking in the upcoming years. See you at the next ONS event!
Thanks to PANTHEON.tech, I had the opportunity to attend the PyCon SK conference, which took place on March 9–11, 2018, at the Faculty of Informatics and Information Technologies of the Slovak University of Technology in Bratislava. Its intent was to promote Python and spread open source technologies and ideas. The speakers were professionals from various areas of software development – from documentation writers through big data analysts to coders as such. Thus, the lectures covered a wide range of topics, and almost anyone could have found their cup of tea.
Friday, 9 March
The day started with Alex Ellis’s talk about OpenFaaS (Functions as a Service). He introduced the OpenFaaS project and explained how to build one’s own serverless functions in containers using Docker, Kubernetes, or other orchestrators through the extensible architecture. The talk included practical demonstrations of serverless functions, such as voice-driven retrieval of weather information, colorizing black-and-white pictures in one click, and more.
The talks later continued with Mikey Ariel, also known as That Docs Lady. She talked about docs and the community, pointing out the various types of project documentation – from READMEs through quickstart tutorials to error messages. The talk introduced or re-acquainted us with topics such as content strategy, docs-as-code, DevOps optimized for docs, and contribution workflows. One of the many witty observations she made was: “Instead of documenting a bunch of bugs, why not fix them?!”
Saturday, 10 March
For me, personally, Saturday provided a few highlights.
Anton Caceres talked about big data analysis, and the libraries and tools that Python provides in this area. The core skills of data scientists, as he emphasized, are the ability to read data, to visualize it, to formulate the right questions, and to use one’s imagination when answering those questions through visual presentation of the data.
Another interesting one was by Michael Kennedy. The topic was “Pythonic code, by example”. He explained the concepts of writing idiomatic code in Python (i.e. Pythonic code) that is best aligned with the language’s features and ideals. The talk took us on a tour of some of the more important Pythonic concepts, using many examples of perfectly functional but non-Pythonic Python code alongside its Pythonic equivalents. Most of the code examples were written in Python 3.5.
Ryan Kirkbride gave the last talk of the day; or, better said, a performance. He suggested that while coding is mostly quite a lonely activity in which a coder interacts with the program, there is also a way to make coding an interactive activity shared with a community. He provided an example himself by live-coding a program that generated music. The idea of sharing the experience of coding with others underlined the idea behind the conference – collaboration, sharing and community.
Sunday, 11 March
On Sunday, we had a look at end-to-end UI testing of applications. Vladimir Kopso spoke about writing an end-to-end testing automation framework and shared some tips for making the code cleaner. He also spoke about running multiple test suites in parallel in Docker containers, and about the time savings this approach brought to running automation test suites.
Tibor Arpáš presented his ideas on how to make writing code in various IDEs more efficient and how to give the coder valuable information about their code. He suggested that when code runs, valuable information is created about the code itself, and he offered a few ideas on how to display this information together with the code in one place.
To sum it up, over three days full of Python and open-source topics, we learned a lot from the speakers. Some were better than others, a few were a bit boring, but a few were highly motivating and engaging. Community was the leitmotif that ran through almost all of them, and it was also apparent in the overall atmosphere of openness in the hallways, where you could approach the speakers and discuss things with them.
Big thanks to PANTHEON.tech and to the organizers of PyCon SK 2018 for this amazing experience.
OpenDaylight’s YANG Tools project forms the bottom-most layer of OpenDaylight as an application platform. It defines and implements interfaces for modeling, storing and transforming data modeled in RFC7950, known as YANG 1.1 — such as a YANG parser and compiler.
What is YANG Tools?
Pantheon engineers started developing yangtools some 5 years ago. It originally supported RFC6020 and went through a number of different versions. After releasing yangtools-1.0.0, we introduced semantic versioning as an API contract. Since then, we have retrofitted the original RFC6020 meta-model to support RFC7950. We also implemented the corresponding parser bits, which were finalized in yangtools-1.2.0 and shipped with the Nitrogen Simultaneous Release.
yangtools-2.0.0 entered its development phase on August 14th, 2017 and was released on November 27th, 2017, which is when the search for an integration window started. Even though we had the most critical downstream integration patches prepared, most downstream projects had not even started theirs. Integration work and coordination were quickly escalated to the TSC, and the integration finally kicked off on January 11th, 2018.
Integration was mostly complicated by the fact that odlparent-3.0.x was riding with us, along with the usual Karaf/Jetty/Jersey/Jackson integration mess. It is now sorted out, with yangtools-2.0.1 being the release to be shipped in the Oxygen Simultaneous Release.
What is new in yangtools-2.0.1?
309 commits
2009 files changed
54126 insertions(+)
45014 deletions(-)
The most user-visible change is that in-memory data tree now enforces mandatory leaf node presence for operational store by default. This can be tweaked via the DataTreeConfiguration interface on a per-instance basis, if need be, but we recommend against switching it off.
For downstream users using karaf packaging, we have split our features into stable and experimental ones. Stable features are available from features-yangtools and contain the usual set of functionality, which will only expand in its capabilities. Experimental features are available from features-yangtools-experimental and carry functionality which is not stabilized yet and may get removed — this currently includes ObjectCache, which is slated for removal, as Guava’s Interners are better suited for the job.
Users of yang-maven-plugin will find that YANG files packaged in jars now have their names normalized to RFC7950 guidelines. This includes using the actual module or submodule name as well as capturing the revision in the filename.
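RFC7950 (section 5.2) recommends naming YANG files as the module or submodule name, optionally followed by ‘@’ and the revision date. A minimal sketch of that normalization in Python (the function name is ours, for illustration; it is not part of yang-maven-plugin):

```python
def normalized_yang_name(module_name, revision=None):
    # RFC7950 section 5.2: module-or-submodule-name ['@' revision-date] '.yang'
    if revision is not None:
        return "%s@%s.yang" % (module_name, revision)
    return "%s.yang" % module_name

print(normalized_yang_name("ietf-interfaces", "2014-05-08"))
```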
API Changes
From an API change perspective, two changes stand out. We have pruned all deprecated methods, and all YANG 1.1 API hacks marked with ‘FIXME: 2.0.0’ have been cleared up. This results in better ergonomics for both API users and implementors.
yang-model-api has seen some incompatible changes, ranging from the renaming of AugmentationNode, TypedSchemaNode and ChoiceCaseNode to some targeted use of Optional instead of nullable returns. The most significant change here is the introduction of EffectiveStatement specializations — I will cover these in detail in a follow-up post, but they have enabled us to do the next significant item.
YANG parser has been refactored into multiple components. Its internal structure changed, in order to hide most of the implementation classes and methods. It is now split into:
yang-parser-impl (being the default-configured parser instance)
and a slew of parser extensions (RFC6536, RFC7952, RFC8040)
There is a yang-parser-spi artifact, too, which hosts common namespaces and utility classes, but its layout is far from stabilized. Overall, the parser has become a lot more efficient and better at detecting and reporting model issues. Implementing new semantic extensions has become a real breeze.
YANG Codecs
YANG codecs have seen a major shift, with the old XML parser in yang-data-impl removed in favor of yang-data-codec-xml. yang-data-codec-gson gains the ability to parse and emit RFC7951 documents. This allows the RFC8040 NETCONF module to come closer to full compliance. Since the SchemaContext is much more usable now, with Modules being indexed by their QNameModule, the codec operations have become significantly faster.
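For illustration, RFC7951 qualifies top-level JSON member names with the YANG module name, so a document such a codec handles looks roughly like this (the module and leaf names below are standard IETF examples, not output of yang-data-codec-gson itself):

```python
import json

# An RFC7951-style document: top-level members are "module-name:node-name"
doc = """
{
  "ietf-interfaces:interfaces": {
    "interface": [
      {"name": "eth0", "type": "iana-if-type:ethernetCsmacd", "enabled": true}
    ]
  }
}
"""
parsed = json.loads(doc)
# The module prefix distinguishes nodes coming from different YANG modules
top = next(iter(parsed))
print(top.split(":"))  # module name, then node name
```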
Overall, we are in a much better and cleaner shape. We are not looking at a 3.0.0 release anytime soon, and we can now deliver incremental improvements to YANG Tools at a much more rapid cadence than was previously possible with the entire OpenDaylight simultaneous release cycle in the way.
We already have another round of changes ready for yangtools-2.0.2 and are looking forward to publishing them.
At the end of November 2017, a very special talk took place at Banská Bystrica’s Matej Bel University. Within the broader “Extrapolations and the Scientific Colloquium” program, a lecture was given by the legend of Czechoslovak computing and father of the PMD-85, Roman Kišš. Why is he a legend, and why was it a must for me to see him talk, even though I only received the invitation three hours before the event started?
Roman Kišš is the inventor of the most successful Czechoslovak computer of the 1980s, the PMD-85. He also developed its Didaktik Alfa clone. If you attended an elementary or secondary school, or the youth Pioneer organization, in 1980s communist Czechoslovakia, you definitely must have had a close encounter with a PMD.
The PMD-85 was an 8-bit computer built by Tesla Piešťany around the MHB 8080A processor, a clone of the Intel 8080. With 48 KB RAM and 4 KB ROM, it was considered ahead of its time. In spite of consisting of low-quality components, its performance was unmatched.
My first ever experience with a computer in the 1980s, was with a PMD. Roman Kišš’s work, from a technological point of view, was on par with what Jobs and Wozniak had done in the US.
When I had the chance to go see Mr. Kišš’s lecture, I could not refuse. Nostalgia, curiosity, and the almost mystical aura encompassing his personality made sure of that.
The lecture was divided into two segments:
PMD-85 and how it came to life
Microsoft Azure
I was mainly curious about the PMD-85-focused segment.
During the first segment, Roman Kišš discussed how things worked in communist Czechoslovakia (or, how nothing worked). Stores had no supplies, nothing was in stock and anything you were able to lay your hands on was either rubbish, or stolen from somewhere.
There was a popular saying that if you stand out of the crowd, your head will be chopped off. Or, as a late 80s punk song recommended, everyone shall write with a blue pen. Look the same, behave the same, and do not deviate from the crowd. Unfortunately, many of these habits still persist, especially one that has become a part of our folklore: do only what we are told to. This is also called the “zero fails given approach.”
Mr. Kišš talked a lot but, unfortunately, not enough about the technicalities of the PMD. He discussed organizing his work, research and people, which was of great value to me. He talked for over an hour, and even though he swamped us with information, it was not even a tenth of what he wanted to say.
For me, the main takeaways were three messages that I kept thinking about for weeks to come.
01: You need to leave. You’ve outgrown us.
When Roman Kišš reached the stage that everybody in Czechoslovakia wanted a PMD, his head of team at Tesla Piešťany had a chat with him.
“Roman, we’ll need you to leave. You’ve outgrown us.”
To this, Roman‘s reply was brief,
“It’s your fault that you haven’t moved an inch!”
I could immediately imagine a young enthusiast who did not really fit the “zero fails given” environment. The main problem was that they could not afford to employ him unless he became a department of his own, without them as his colleagues. Of course, you would not want to employ a colleague who turned everyone into his enemies by achieving within several months something that others had been struggling with for years without any results.
With the money Mr. Kišš had earned from patents and sales of the older PMI-80 computers, he was able to put together enough of his own resources to fund a team of enthusiasts who helped him with prototyping. What were his objectives? Motivating people with potential who were willing to work.
He built an exclusive club of co-workers, which a number of people wanted to join. He paid for team-building events in exclusive restaurants, keeping open tabs. Even though, looking back, it might seem that the PMD-85 was the achievement of an individual, it was, in fact, the achievement of a team. The PMD-85 computer was a proof of concept which needed transforming into a product. Kišš knew this, and he did everything that could have been done.
He managed to build a team much better and stronger than the communist economic model, based on five-year plans, could imagine, even in its representatives’ wildest dreams full of shots fired at Saint Petersburg’s Winter Palace. He did everything he could so that the team could keep growing, taking trainings and improving their education. He had a clear target and stayed focused on achieving it. A good team leader keeps his target in the cross-hairs.
02: You can’t be both a good father and a great professional
This sentence came with an explanation: you can’t be perfect at both at the same time. You can’t be completely devoted to both your work and your family; one of them will always be sidelined. Mr. Kišš admitted that he didn’t spend enough time with his family, as he spent almost all of it at work. This made me think – what has changed since the 1980s?
Teamwork is one of the most important soft skills, yet you come out of school without ever having heard of it. We have better access to better information. We have the tools and procedures to learn better and faster. We’ve got everything we need, but is that enough? Most probably not.
Having the means but lacking motivation is worse than not having the means at all. We primarily need motivation to work hard – this was true then as it is now. However, everything is a matter of scale: do I work hard because I want to improve myself and advance the team, or do I work hard because I always want to be the best?
In the first case, you are cooperation-oriented, leaving enough room for both being a good father and a great professional. However, the second case is strongly competitive and leaves room for nothing else; the drive to be the best always needs someone to compete with.
And now for the philosophical question: is it better to be a strong member of a strong team which would also be able to thrive without a specific individual, or be the dominant member having a fully dependent team, which, if losing the dominant member, ceases to exist? I’d go for being a strong member of a strong team.
What about you, dear reader?
03: Money should never be your goal, only the means for reaching one
As I already mentioned, Roman Kišš spent a lot of his own resources on materializing his ideas. He spent it on people, literature, electronic components, and whatever he needed at the moment. Making money was never his goal. As he mentioned, he received only 4 Kčs for each of the 125,000 PMD-85 units sold.
He also earned a little designing the Didaktik Alfa computer for Didaktik Skalica. He invested all the funds into moving his projects forward, and into living off during his emigration period in Canada. This came after he had realized there was no room for his further growth in Czechoslovakia. Also, no one wanted to employ him any more, but that’s a different story.
After relocating to Canada, he had to start from scratch. He did a semi-legal PhD: he did everything other PhD students at the university were doing, but without receiving a salary. What was his reward? The professor who led his research arranged for Kišš to attend all the lectures and take all the exams. Almost a normal university study – just without a diploma at the end.
His motivation was purely about acquiring knowledge, so he did not hesitate and accepted: once you’ve reached a certain level of skill, no one is interested in what you’ve studied, only in what you know. Your knowledge is the only thing you truly own. Roman Kišš’s knowledge and skills have helped him earn far more than the 500,000 Czechoslovak crowns he spent on his diploma-less studies.
Here I have to ask myself: what’s the sum of all my knowledge when Google has an outage? It may be an over-used phrase, yet I truly believe that this gentleman is a living example that everyone should do what they consider meaningful, not what makes them a fortune. Do your best and money will come.
…and back to PMD-85
The PMD-85 computer is a piece of technology holding a very special place in my life. It’s primarily a personal nostalgia, as it was the first computer I got as a third grader. My father built it from components that he honorably stole, which was the standard way of acquiring most possessions in a socialist economy.
I started learning BASIC first, later switching to Pascal at Banská Bystrica Pioneer organization, which was, by the way, located in the same building where PANTHEON.tech has its Banská Bystrica office now. Later on, a second piece was added to my private collection.
I took them both to Roman Kišš’s lecture to meet their creator. I got them both signed. And I thanked him for PMD-85 being responsible for my career, for doing stuff that I truly like, for living.
Mr. Kišš seemed to be happy, and so am I. Thanks to Mr. Kišš, PMD-85 and my father.
At the beginning of December 2017, we attended the KubeCon & CloudNativeCon 2017 conference in Austin, Texas. The conference, organized by the Linux Foundation, brought together leading contributors in cloud native applications and computing, containers, micro-services, central orchestration processing and related projects.
More than four thousand developers, together with other people interested in cloud-native technologies, visited the event in Austin. The growing number of attendees is a testimony to the rising importance of Kubernetes and containerized applications for companies of all sizes.
The schedule was full of talks about various CNCF technologies such as Kubernetes, Prometheus, Docker, Envoy, CNI and many others. “Kubernetes is the new Linux,” pointed out Google’s Kelsey Hightower in his keynote, predicting a bright future for these technologies.
In addition to talks, the sponsors at KubeCon showcased their projects in a huge exhibition hall. The FD.io booth presented a project our friends from Cisco contribute to – a VPP-centric network plugin for Kubernetes which aims to provide the fastest connectivity for containers by bypassing the kernel network stack. During the presentation of the project, we were drawn into many conversations with attendees from various companies, which proves their interest in the solution.
The IETF 100 Hackathon wrapped up several weeks ago in steamy Singapore. Over two hundred participants spent the weekend of November 11th–12th discussing, collaborating and developing sample code, solutions and ideas that show practical implementations of IETF standards. Our theme was IPv4-IPv6 Transition Technology Interop. We, at PANTHEON.tech, had to be part of it.
The theme plays out between two characters, one of whom is an IPv6 proponent while the other really admires NATs – and that was our team. We wanted to test whether the “new” Internet would run on IPv6 plus NAT64, or whether we can keep the “old” Internet working forever through IPv4 address-sharing mechanisms.
The room started to fill quickly after the doors opened. We displayed a poster that introduced the project and after a brief kick-off presentation got to work. Our table, full of power outlets, switches, gateways, routers and patch cables, attracted the most interest among the hackathon participants.
Transition technology interop.
Testing and findings
The hackathon was the first opportunity for interop testing of VPP DS-Lite AFTR, as well as NAT64 and lw4o6. We also spent the weekend implementing the VPP DHCPv6 PD client and a STUN library with DNS64/NAT64 prefix discovery and an IPv4-literal synthesizer. We also tried testing applications behind DS-Lite, 464XLAT and NAT64.
We made a few interesting findings. On the iPhone, an ecosystem which is forcing IPv6-only support, almost everything works. On the laptop, most stuff works. We learned that building these networks is very hard! We thought IPv6 would just be plug and play. IPv6 addresses are long to type, and synthesizing an IPv6 address from NAT64 prefixes turned out to be a poor idea, but at least we fixed a buffer overflow bug. Media still works point-to-point, even behind multiple NATs.
Views from Singapore windows.
Results & future of IPv6
We think the future really should be IPv6 plus NAT64, but this puts new requirements on IPv6 hosts: they need to be able to do NAT64 prefix discovery, synthesize an IPv6 address from an IPv4 literal, and support local DNS64.
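For illustration, synthesizing an IPv6 address from an IPv4 literal means embedding the IPv4 address into the NAT64 prefix as described in RFC 6052. A host that has discovered the well-known /96 prefix 64:ff9b::/96 could do it like this (a sketch, not our hackathon code):

```python
import ipaddress

def synthesize_nat64(ipv4_literal, nat64_prefix="64:ff9b::/96"):
    """Embed an IPv4 literal into a /96 NAT64 prefix per RFC 6052."""
    prefix = ipaddress.IPv6Network(nat64_prefix)
    v4 = ipaddress.IPv4Address(ipv4_literal)
    # For a /96 prefix, the IPv4 address occupies the last 32 bits
    return ipaddress.IPv6Address(int(prefix.network_address) | int(v4))

print(synthesize_nat64("192.0.2.33"))  # 64:ff9b::c000:221 (the RFC 6052 example)
```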
Our work continued on Sunday until 2 pm, when we stopped doing whatever we were doing and the sharing of results began: presentations, no longer than 3 minutes, recapping results, lessons learned and recommendations. The video of the presentations and awards is available on YouTube.
At the end of October 2017, I had a chance to visit one of the world’s largest cities – beautiful Moscow, capital of Russia, where the BIS 2017 event took place. BIS – Building Infrastructure Systems – focused on data centers, networks and technologies connected to these topics.
The venue of choice was the Asimut Hotel. It was a fully smoke-free zone, with lots of photos on the walls picturing healthy ways of life.
Organization
BIS 2017 was very well organized and the timing was precise. Everything was on time and easy to find. It was attended by nearly 1,000 delegates, among them many representatives of businesses and government bodies, highly skilled technical specialists, and CxOs managing large companies.
From the very beginning, I literally had no time to sit down for a while – such was the number of visitors to our booth. Most of them showed great interest in our company’s scope of work, the level of expertise we provide, and the projects we have participated in. Not only that – there were hundreds of other questions they wanted to ask.
Presentation day
At 11:20, we had a presentation slot allocated for PANTHEON.tech. People were showing great interest in SDN, NFV and IoT technologies. I had 15 minutes to discuss the latest trends in SDN and NFV and to introduce our company to the audience.
Unfortunately, there was almost no time left for the Q&A part, so I invited everyone to our booth. And people came right after the presentation! Until the very end of the day, people kept coming, asking questions and requesting references and contacts. That was truly amazing!
Networking
I spoke to people from the Government of Moscow, from financial bodies, and from telecom and development companies. There were also several representatives of the largest Russian system integration companies, who were interested in cooperation.
At the same time, it was inspiring to listen to their practical “field” experience and their understanding of the market. The overall impression I got is that SDN/NFV technologies are currently being actively researched and tested in Russia; however, significant ROI is still a rare case there. More work and time are needed until that point is reached.
My final impression was that we came to show PANTHEON.tech to Russia at just the right time. There are many interesting projects out there where our long-term expertise in the field of networking software development may prove useful.
PANTHEON.tech was part of the Open Networking User Group (ONUG) 2017 in New York. The conference was held from October 17th until the 18th.
ONUG Highlights & Insights
ONUG belongs to the group of conferences that are rather small in size, but surely not in importance. This year it took place in New York. The Big Apple is a truly interesting place, and so was the conference, which combined a trade show with panel discussions.
Pantheon Technologies did not actively participate in the trade show part this time, as our focus was more on hunting for potential business.
ONUG is a 2-day event fully packed with big names on stage, as part of panel discussions, and a good selection of vendors, community leaders, service and solution providers.
The conference includes keynotes from IT business enterprise leaders as they address their open software-defined cloud-based infrastructure journeys, updates from the Working Group Initiative members, hands-on tutorials and interactive labs, real world use cases, proof of concept demonstrations and a vendor technology showcase.
The goal of all ONUG events and initiatives is to bring together the full IT community and allow IT business leaders to:
learn from peers
make informed open infrastructure deployment decisions
open up a dialogue between the vendor and user communities, in order to collectively drive open infrastructure
We are looking forward to ONUG 2018
For Pantheon Technologies, this was a good opportunity to understand the current networking needs of service providers, enterprises and vendors. This helps us better promote Pantheon in our field of expertise, customized software development. ONUG clearly showed that service providers are heading more and more towards SD-WAN solutions.
We discussed our expertise in SDN and NFV with almost all of the ONUG participants and found several potential partners to explore this exciting business with. Software-Defined Networking is no longer just a buzzword; it is well established, and the market is very competitive, especially in the US.
That is why we at Pantheon Technologies need to be on top of it.
This year, our colleagues from PANTHEON.tech visited quite a few tech events around the globe. Among them, the SDN NFV World Congress, taking place in The Hague, was one we definitely couldn’t have missed.
As one of the largest conferences focused on network transformation, it attracted more than 1,700 visitors from companies all over the world. And it wasn’t only large companies, many of which are among our long-term clients; a fairly large number of start-ups joined in order to present their solutions.
Intent-based Networking: Still not in sight
It’s thrilling to follow the gradual transformation of proprietary solutions into those based on open-source. The reason is simple: at Pantheon Technologies, we contribute into several open-source projects, as we firmly believe that it’s the only way to ensure interoperability and standardization of individual building blocks of SDN and NFV solutions.
Software-defined networking is still under development. Until now, most use cases have only been dealing with automation. The bottom line is that it’s still an HDN – a human-defined network. It’s still people who express the desired state of the network; it’s not done by software.
Therefore, once the issues with automation and interoperability of the building blocks are solved, a new adventure in the world of intent-based networking might await. The current SDN solutions offered by the market will only provide the infrastructure to be used to fulfill the network users’ intentions.
During the week we spent at the conference, we had plenty of interesting discussions, both sales-oriented and technical. Now, we’re very much looking forward to further meetings and discussions.
Looking for customers and partners in new markets is an essential part of a diversification strategy. New markets bring new opportunities, new insights, needs and challenges. Hence, at the beginning of this October, my colleagues Denis and Robert and I traveled to Singapore in search of all of the above.
We anticipated finding it all at the huge TechXLR8 event, sponsored by PANTHEON.tech, which comprised several smaller happenings: 5G Asia, IoT World Asia, NV & SDN, the AI Summit and Project Kairos Asia. Being a Silver Sponsor at such a vast event was a brand new experience for us.
We spent two days discussing SDN and networking, introducing Pantheon Technologies and our products to representatives of the Asian market. We also had the opportunity to take part in a panel discussion on NFV MANO interoperability and how it fits into the open-source world, along with the related standardization being done by ETSI.
This discussion, more than anything else, made our presence known to the other attendees. So, we talked, smiled and explained. People were interested in the Visibility Package we demonstrated. They asked a lot about the company and our contributions to OpenDaylight, as well as other open-source projects we are part of or have experience with.
SDN, OpenDaylight and the others
Pantheon Technologies was not the only company promoting OpenDaylight-related solutions. Official OpenDaylight members were present, as well as other companies and groups offering their ODL-based solutions. We received several offers of cooperation from company representatives advertising their ODL and SDN-related skills. This clearly indicates the importance of the OpenDaylight project.
IoT is the word
Despite TechXLR8 being crowded with companies presenting different IoT solutions, and despite having our booth placed in the NFV/SDN area, we received a great number of IoT-related questions. We talked about IoTDM, a oneM2M-compliant data broker for ODL. For some people, oneM2M was just another buzzword, and they frequently asked about specific use cases in the IoT field. Our question, “what do you need?”, still hangs there waiting to be answered. Asia seems to be searching for its own answer to what IoT stands for. There are open opportunities for us to help find that answer.
Man in the middle
Along with all the companies presenting their products, skills or ecosystems, there was one special group of people present. They usually introduced themselves as “the company that represents telco in Asia.” Who were these people?
Asian markets are quite different from what we have experienced so far – in the way companies search for partners and how partnerships are built. There are many companies acting as matchmakers. It seems that a significant number of telco companies don’t actively search for partners, but rely on matchmakers instead. Matchmakers actively seek solutions or vendors that might match their telco customers. And what do matchmakers have to say about their customers’ expectations?
All of them had pretty much the same answer: we need to approach companies with our solutions and make them think it is what they need. As if the only thing the market were looking for was an advantage over competitors – whatever solution makes that happen.
Even though we can’t honestly say that a market-driving vision is missing, it certainly feels that way. The presence of buzzwords without focus on specific use cases indicates that the Asian telco and IT market has evolved differently from the markets we are used to operating in.
Hic abundant leones
The best way to describe our first encounter with the Asian market is mapping terra incognita, the unknown land, a place where the lions are. We’ve made the first step into the unknown and found some potential partners along the way. Now we have to figure out how to turn this first contact into a working partnership and collaboration.
We need to find a set of use cases to show to potential customers in Asia, but we aren’t quite sure what to show and to whom. Finding that out is our next goal: find a use case to build a showcase around, and find an audience for it. For that, we need to keep engaging the matchmakers we already know, and also keep looking for new ones.
Lesson learned
Are our solutions tailored to fulfill specific needs? Indeed they are. Do our solutions bring variety and scalability? Definitely. Can we deliver? Yes, we can. Next time, we have to show that more explicitly. We need to prepare showcases that amaze people.
We need to find an equilibrium between our skills and the market’s desire for buzzwords. It does not need to be of product quality; it does not even need to be a product by itself. It just needs to show – hey, we are the right ones.
Our journey to Singapore was a success. The journey to Asian markets has just begun. It is our job to make the most out of it.
In this short article, I would like to share our experience in integrating VPP and Honeycomb, and in extending VPP services. Among our colleagues are many developers who contribute to both projects, as well as people who work on integrating these two projects with each other and with the rest of the networking world.
Let’s define the basic terms.
What is VPP?
According to its wiki page, it is “an extensible framework that provides out-of-the-box production quality switch/router functionality”. There is definitely more to say about VPP, but what’s most important is that it:
provides switch and router functionality
is of production quality
is platform independent
“Platform independent” means that it is up to you where you run it (a virtualized environment, bare metal, or elsewhere). VPP is a piece of software which is by default distributed in the form of packages. Final VPP packages are available from the official documentation page. Let’s say we decide to use stable VPP version 17.04 on stable Ubuntu 16.04. You can download all available packages from the corresponding Nexus site. If there is no such platform available on Nexus, you can still download VPP and build it on the platform you need.
VPP processes the packets flowing in your network much like a physical router does, but with one big advantage: you do not need to buy a router. You can use whatever physical device you have and just install the VPP packages.
What is Honeycomb?
Honeycomb is a management agent for VPP. It provides NETCONF and RESTCONF interfaces on its northbound side and stores the requested configuration (in XML or JSON form) in a local data store. There is also the hc2vpp project, which calls the corresponding VPP APIs in reaction to new configuration stored in the data store.
VPP also has its own CLI for direct communication, a text interface similar to an OS shell. Honeycomb makes VPP easier to use by providing an interface that sits somewhere between a GUI and a CLI. You can request VPP state or statistics via XML, and you will get the response in XML form. Honeycomb can be installed in the same way as VPP, through packages available from the Nexus site.
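As a minimal sketch of querying that northbound interface, the snippet below builds an authenticated RESTCONF GET request using only the Python standard library. The port (8183), the `admin`/`admin` credentials, and the `ietf-interfaces` path are Honeycomb’s usual defaults, but treat them as assumptions and verify them against your own Honeycomb configuration.

```python
import base64
import urllib.request

def build_restconf_request(host, port, path, user, password):
    """Build an authenticated GET request for Honeycomb's RESTCONF northbound."""
    url = "http://{}:{}/restconf/{}".format(host, port, path)
    token = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Basic " + token)
    req.add_header("Accept", "application/xml")  # ask for the XML form of the state
    return req

# Example: read the operational state of all VPP interfaces.
req = build_restconf_request("127.0.0.1", 8183,
                             "operational/ietf-interfaces:interfaces-state",
                             "admin", "admin")
# urllib.request.urlopen(req) would return the XML response from a running Honeycomb.
```

The same request with `Accept: application/json` would return the JSON form instead.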
Where can the combination of VPP and Honeycomb be used?
We’ve already showcased several use cases on our PANTHEON.tech YouTube channel:
Another alternative is to use the two as vCPE (Virtual Customer Premises Equipment), as specified in this draft. One of the projects that aims to implement it is ONAP, with VPP used as the vCPE endpoint for the internet connection from a provider. According to this use case, the vCPE should provide several services. Standalone VPP does not support such services, but they can still be added to the machine where VPP is running. For demonstration, we have chosen DHCP and DNS.
DHCP
In this case, we have two VMs. VM0 simulates the client side (DHCP client), which wants an IP address to be assigned to interface enp0s9. VM1 contains VPP and a DHCP server. The DHCP request is broadcast via enp0s9 on VM0 to VPP1 via port 192.168.40.2. VPP1 is set up as a DHCP proxy, and the DHCP request message is forwarded to 192.168.60.2, where the DHCP server responds with a DHCP offer. Finally, after all DHCP configuration steps are done, interface enp0s9 on VM0 is configured with the IP address 192.168.40.10.
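The relay step above can be modeled in a few lines. This is a hypothetical sketch of the proxy logic only, not VPP’s implementation: the proxy rewrites the client’s broadcast into a unicast toward the configured server and fills in the relay address (giaddr), which the server uses to pick the right address pool. The packets are represented as plain dicts.

```python
DHCP_SERVER = "192.168.60.2"    # address of the DHCP server behind VPP1
RELAY_ADDRESS = "192.168.40.2"  # VPP1's address on the client-facing segment

def relay_dhcp_request(packet):
    """Rewrite a broadcast DHCPDISCOVER/REQUEST into a unicast toward the server."""
    relayed = dict(packet)
    relayed["dst"] = DHCP_SERVER        # unicast to the configured server
    relayed["giaddr"] = RELAY_ADDRESS   # tells the server which pool to allocate from
    return relayed

# The client's broadcast, as seen by the proxy on the 192.168.40.0/24 segment.
request = {"op": "DHCPDISCOVER", "src": "0.0.0.0",
           "dst": "255.255.255.255", "giaddr": "0.0.0.0"}
relayed = relay_dhcp_request(request)
```

The server’s DHCP offer then travels back through the proxy to the client, completing the exchange.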
DNS
In this case, we also have two VMs. VM0 simulates the client side (DNS client), which needs to resolve a domain name to an IP address. The request is routed via a local port to VPP1, and from there to the DNS server in VM1. If the resolution is requested for the first time, the request is forwarded to an external DNS server; otherwise, the local DNS server serves it from its cache.
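The caching behavior described above can be sketched as a small resolver class. This is an illustrative model under stated assumptions, not the actual DNS server used in the demo: only a cache miss triggers a query to the upstream (external) server, and every later lookup of the same name is answered locally.

```python
class CachingResolver:
    """Answer from the local cache; go upstream only on the first lookup."""

    def __init__(self, upstream_lookup):
        self.cache = {}
        self.upstream_lookup = upstream_lookup  # callable querying the external server
        self.upstream_queries = 0               # counts cache misses

    def resolve(self, name):
        if name not in self.cache:              # first time: ask the external server
            self.upstream_queries += 1
            self.cache[name] = self.upstream_lookup(name)
        return self.cache[name]                 # afterwards: served from the cache

# A stub stands in for the external DNS server here.
resolver = CachingResolver(lambda name: "93.184.216.34")
first = resolver.resolve("example.com")   # forwarded upstream
second = resolver.resolve("example.com")  # served locally
```

Only one upstream query is made no matter how many times the same name is resolved.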
As a company with highly skilled people and experience in networking and ODL, PANTHEON.tech provides solutions to any problem or requirement our clients bring up. In this case, we are going to illustrate what we can do by showcasing the workflow of a project.
Identifying a need
The first step was to identify a need. One of the main issues of working with the data store is that we lose data when the controller goes down.
Proposing a solution
Once we’ve identified the need, we start looking for possible solutions, analyzing each one’s pros and cons in search of the best answer available. In this case, the best available solution was to replace the in-memory ODL data store with a persistent database: Apache Cassandra.
What is Cassandra?
If you need scalability and high availability without compromising performance, the Apache Cassandra database is the right choice for you. It is the perfect platform for mission-critical data thanks to linear scalability and proven fault-tolerance on cloud infrastructure or commodity hardware.
Cassandra’s support for replication across multiple datacenters is best in class. It provides your users with lower latency, and you with peace of mind, knowing how simple it is to survive a regional outage.
Defining the solution requirements
We need to define the requirements for the proposed solution: what it will do, how, and what is required from the user. For this project, we decided that the user would register the service at a specific prefix, pointing at a specific path on the shard the user is interested in storing.
The service will listen for any changes under this path, and whenever the information is updated, it will transform the information into JSON format and store it in Cassandra.
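The flow just described can be sketched as a small listener. This is a hypothetical illustration of the design, not the actual ODL implementation: the class name and the dict standing in for a Cassandra table are inventions for the sketch, and in the real service the callback would be a registered data-change listener writing through the Cassandra driver.

```python
import json

class PersistenceListener:
    """Serialize changes under a registered prefix to JSON and persist them."""

    def __init__(self, prefix, table):
        self.prefix = prefix  # the path the user registered the service at
        self.table = table    # stand-in for a Cassandra table

    def on_data_changed(self, path, data):
        if not path.startswith(self.prefix):
            return                            # change is outside the registered subtree
        self.table[path] = json.dumps(data)   # transform to JSON and persist

# A dict plays the role of the persistent store in this sketch.
store = {}
listener = PersistenceListener("/network-topology", store)
listener.on_data_changed("/network-topology/node-1", {"status": "up"})
listener.on_data_changed("/inventory/node-2", {"status": "down"})  # ignored: other subtree
```

Because the stored form is plain JSON keyed by path, the data survives a controller restart and can be read back by any other consumer of the database.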
Implementing the solution & testing
With the requirements defined and the solution selected, we identified the steps needed to achieve the expected results, created the corresponding tasks, and implemented them. Finally, we tested the result. Some of the anticipated results can be seen in the table below.
Rate: Writes per second rate.
Duration: Request duration in milliseconds.
Count: The number of changes applied to the simulated data store.
* The benchmark, Karaf and Cassandra were running on the same virtual machine, with 8 GB RAM and 4 dedicated processors.
Use-cases
We’ve identified one use case for this project – which is to have a persistent data-store. But the list of possible benefits does not end there.
If we were storing OpenFlow statistics, for example, we could feed that information into Spark for real-time data analytics and visualization. This would allow us to react and improve our network by, for example, banning or redirecting heavy traffic. Once we have the information, all that remains is to pick the fruit.
In mid-October, the SDN NFV World Congress will dominate Europe’s IT landscape. Taking place in The Hague, Netherlands, the event is Europe’s largest dedicated forum addressing the growing markets of software-defined networking (SDN) and network functions virtualization (NFV).
Naturally, this is the type of event we at Pantheon Technologies gravitate towards sponsoring, and we’re officially one of its partners. There are already a couple of interesting names on board (Open Networking Foundation, Intel, Telefonica, BT, Nokia, Orange…), so how could we be the ones to miss out?
If you’d like to hear about technologies such as OpenDaylight, FD.io, OPNFV and many more, and learn about the magic we can work with them, we’ll be looking forward to talking to you live! And if you just want to get to know us, or simply have a chat, feel free to drop by!
Martin Firak