PANTHEON.tech

[lighty.io] Create & Use Containerized RNC App w/ Clustering

October 5, 2022/in Blog /by PANTHEON.tech

The lighty.io RESTCONF-NETCONF (RNC) application allows you to easily initialize, start, and utilize the most used OpenDaylight services, including clustering, and optionally add custom business logic. PANTHEON.tech provides a pre-prepared Helm chart inside the lighty.io RNC application, which can easily be used for Kubernetes deployment.

Clustering is a mechanism that enables multiple processes and programs to work together as one entity.

For example, when you search for something on google.com, it may seem like your search request is processed by only one web server. In reality, your search request is processed by many web servers connected in a cluster. Similarly, you can have multiple instances of OpenDaylight working together as one entity.

The advantages of clustering are:

  • Scaling: If you have multiple instances of OpenDaylight running, you can potentially do more work and store more data than you could with only one instance. You can also break up your data into smaller chunks (shards) and either distribute that data across the cluster or perform certain operations on certain members of the cluster.
  • High Availability: If you have multiple instances of OpenDaylight running and one of them crashes, you will still have the other instances working and available.
  • Data Persistence: You will not lose any data stored in OpenDaylight after a manual restart or a crash.

This article demonstrates how to configure the lighty.io RNC application to use clustering, how to rescale the cluster, and how to connect a device to it.

Add the PANTHEON.tech Helm Repository to MicroK8s

To deploy the lighty.io RNC application, we will use the MicroK8s local Kubernetes engine and show how to install it. Feel free to use any other local Kubernetes engine you already have installed.

1. Install microk8s with snap.

sudo snap install microk8s --classic
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube

2. Enable the required add-ons.

microk8s enable dns helm3

3. Add the PANTHEON.tech Helm repository to microk8s.

microk8s.helm3 repo add pantheon-helm-repo https://pantheontech.github.io/helm-charts/

4. Update the repository.

microk8s.helm3 repo update

Configuring your lighty.io RNC Clustering app

Before we demonstrate how our cluster functions, we need to properly configure the lighty.io RNC app.

The lighty.io RNC application can be configured through a Helm values file. The default RNC app values.yaml file can be found in the lighty.io GitHub repository.

1. Set up the configuration to enable clustering using the --set flag:

  • lighty.replicaCount=3 // configures the size of the cluster
  • lighty.akka.isSingleNode=false // if set to true, the Akka configuration would be overwritten with the default single-node configuration
  • nodePort.useNodePort=false // use a ClusterIP service rather than a NodePort service
  • lighty.moduleTimeOut=120 // the cluster takes some time to deploy, so set the timeout to a higher value

Note: lighty.akka.isSingleNode must be set to false when using clustering.

microk8s.helm3 install lighty-rnc-app pantheon-helm-repo/lighty-rnc-app-helm --version 16.1.0 --set lighty.replicaCount=3,lighty.akka.isSingleNode=false,nodePort.useNodePort=false,lighty.moduleTimeOut=120

To modify the configuration of a running deployment, just change install to upgrade.
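
For example, assuming the same release and chart names as in the install command above, the running cluster could be rescaled from three to five members like this (a sketch only – adjust the names and values to your deployment):

microk8s.helm3 upgrade lighty-rnc-app pantheon-helm-repo/lighty-rnc-app-helm --version 16.1.0 --set lighty.replicaCount=5,lighty.akka.isSingleNode=false,nodePort.useNodePort=false,lighty.moduleTimeOut=120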

1.1 Verify that the lighty-rnc-app is deployed.

microk8s.helm3 ls

Afterwards, the output should list the lighty-rnc-app with the status “deployed”.

2. Set up the configuration using a prepared values.yaml file.

2.1 Download the values.yaml file from lighty-core.

2.2 Update the image to your desired version, for example:

image:
  name: ghcr.io/pantheontech/lighty-rnc
  version: 16.1.0
  pullPolicy: IfNotPresent

2.3 Update the following values in the values.yaml file (a sketch of the result is shown after the note below):

  • lighty.replicaCount=3 // configures the size of the cluster
  • lighty.akka.isSingleNode=false // if set to true, the Akka configuration would be overwritten with the default single-node configuration
  • nodePort.useNodePort=false // use a ClusterIP service rather than a NodePort service
  • lighty.moduleTimeOut=120 // the cluster takes some time to deploy, so set the timeout to a higher value

Note: lighty.akka.isSingleNode must be set to false when using clustering.
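
A minimal sketch of the relevant part of such a values.yaml, assuming the keys nest the same way as the --set paths used above (the full file from lighty-core contains additional settings):

lighty:
  replicaCount: 3
  moduleTimeOut: 120
  akka:
    isSingleNode: false
nodePort:
  useNodePort: false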

2.4 Deploy the lighty.io RNC app with the changed values.yaml file.

microk8s.helm3 install lighty-rnc-app pantheon-helm-repo/lighty-rnc-app-helm --version 16.1.0 --values [VALUES_YAML_FILE]

To modify a running configuration after deploying the app, just change the “install” to “upgrade”.

3. Verify that all pods started. You should see as many pods as the value of replicaCount.

microk8s.kubectl get pods

Create a testing device w/ lighty.io NETCONF Simulator

For testing purposes, we will need a device. The ideal tool in this case is the lighty.io NETCONF Simulator, which we will start inside a Docker container. A Dockerfile that builds an image for this simulated device can be found in the lighty.io repository.

1. Download the lighty.io NETCONF Simulator Dockerfile to a separate folder.

2. Create a Docker image from the Dockerfile.

sudo docker build -t lighty-netconf-simulator .

3. Start the Docker container with the testing device on port 17830 (or any other port, by changing the -p parameter).

sudo docker run -d --rm --name netconf-simulator -p17830:17830 lighty-netconf-simulator:latest

4. Check the IP address assigned to the Docker container. This value will be used as the DEVICE_IP parameter in later requests.

docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' netconf-simulator

Clustering in the lighty.io RNC App

As mentioned at the beginning of this tutorial, the key to understanding clustering is to see multiple instances of the same application as one functioning entity.

To show that our lighty.io RNC cluster is alive and well, we will demonstrate its “keep alive” behavior: when we manually remove the leader from the cluster, the system does not crumble, but holds an election and elects another leader. In practical terms, the cluster continues to function even when the leader is terminated, self-sufficiently electing a new one and thereby proving that it works as a single entity.

1. Show IP addresses of all pods.

microk8s.kubectl get pods -l app.kubernetes.io/name=lighty-rnc-app-helm -o custom-columns=":status.podIP"

2. Use one of the IP addresses from the previous step to view the members of the cluster.

curl  --request GET 'http://[HOST_IP]:8558/cluster/members/'

Your response should look like this:

{
    "leader": "akka://opendaylight-cluster-data@10.1.101.177:2552",
    "members": [
        {
            "node": "akka://opendaylight-cluster-data@10.1.101.177:2552",
            "nodeUid": "4687308041747729846",
            "roles": [
                "member-10.1.101.177",
                "dc-default"
            ],
            "status": "Up"
        },
        {
            "node": "akka://opendaylight-cluster-data@10.1.101.178:2552",
            "nodeUid": "-29348997399314594",
            "roles": [
                "member-10.1.101.178",
                "dc-default"
            ],
            "status": "Up"
        }
    ],
    "oldest": "akka://opendaylight-cluster-data@10.1.101.177:2552",
    "oldestPerRole": {
        "member-10.1.101.177": "akka://opendaylight-cluster-data@10.1.101.177:2552",
        "dc-default": "akka://opendaylight-cluster-data@10.1.101.177:2552",
        "member-10.1.101.178": "akka://opendaylight-cluster-data@10.1.101.178:2552"
    },
    "selfNode": "akka://opendaylight-cluster-data@10.1.101.177:2552",
    "unreachable": []
}

In the response, you should see which member was elected as the leader, as well as all other members.

  • Tip: Use Postman for better response readability
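
If you have jq installed, the leader can also be extracted directly from the same endpoint (an optional convenience; not required by this tutorial):

curl -s 'http://[HOST_IP]:8558/cluster/members/' | jq -r '.leader'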

3. Add the device to one of the members.

curl --request PUT 'http://[HOST_IP]:8888/restconf/data/network-topology:network-topology/topology=topology-netconf/node=new-node' \
--header 'Content-Type: application/json' \
--data-raw '{
    "netconf-topology:node": [
        {
            "node-id": "new-node",
            "host": [DEVICE_IP],
            "port": 17830,
            "username": "admin",
            "password": "admin",
            "tcp-only": false,
            "keepalive-delay": 0
        }
    ]
}'

4. Verify that the device was added to all members of the cluster.

curl --request GET 'http://[MEMBER_1_IP]:8888/restconf/data/network-topology:network-topology/topology=topology-netconf/node=new-node'
curl --request GET 'http://[MEMBER_2_IP]:8888/restconf/data/network-topology:network-topology/topology=topology-netconf/node=new-node'
...

Every member of the cluster should return the same device, with the same values:

{
    "network-topology:node": [
        {
            "node-id": "new-node",
            "netconf-node-topology:connection-status": "connected",
            "netconf-node-topology:username": "admin",
            "netconf-node-topology:password": "admin",
            "netconf-node-topology:available-capabilities": {
                "available-capability": [
                    {
                        "capability": "urn:ietf:params:netconf:base:1.1",
                        "capability-origin": "device-advertised"
                    },
                    {
                        "capability": "urn:ietf:params:netconf:capability:notification:1.0",
                        "capability-origin": "device-advertised"
                    },
                    {
                        "capability": "urn:ietf:params:netconf:capability:candidate:1.0",
                        "capability-origin": "device-advertised"
                    },
                    {
                        "capability": "urn:ietf:params:netconf:base:1.0",
                        "capability-origin": "device-advertised"
                    },
                    {
                        "capability": "(urn:ietf:params:xml:ns:yang:ietf-inet-types?revision=2013-07-15)ietf-inet-types",
                        "capability-origin": "device-advertised"
                    },
                    {
                        "capability": "(urn:opendaylight:yang:extension:yang-ext?revision=2013-07-09)yang-ext",
                        "capability-origin": "device-advertised"
                    },
                    {
                        "capability": "(urn:tech.pantheon.netconfdevice.network.topology.rpcs?revision=2018-03-20)network-topology-rpcs",
                        "capability-origin": "device-advertised"
                    },
                    {
                        "capability": "(urn:ietf:params:xml:ns:yang:ietf-yang-types?revision=2013-07-15)ietf-yang-types",
                        "capability-origin": "device-advertised"
                    },
                    {
                        "capability": "(urn:TBD:params:xml:ns:yang:network-topology?revision=2013-10-21)network-topology",
                        "capability-origin": "device-advertised"
                    },
                    {
                        "capability": "(urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring?revision=2010-10-04)ietf-netconf-monitoring",
                        "capability-origin": "device-advertised"
                    },
                    {
                        "capability": "(urn:opendaylight:netconf-node-optional?revision=2019-06-14)netconf-node-optional",
                        "capability-origin": "device-advertised"
                    },
                    {
                        "capability": "(urn:ietf:params:xml:ns:netconf:notification:1.0?revision=2008-07-14)notifications",
                        "capability-origin": "device-advertised"
                    },
                    {
                        "capability": "(urn:opendaylight:netconf-node-topology?revision=2015-01-14)netconf-node-topology",
                        "capability-origin": "device-advertised"
                    }
                ]
            },
            "netconf-node-topology:host": "172.17.0.2",
            "netconf-node-topology:port": 17830,
            "netconf-node-topology:tcp-only": false,
            "netconf-node-topology:keepalive-delay": 0
        }
    ]
}

5. Find the leader and delete it

5.1 Show all pods. This returns the names of each pod.

microk8s.kubectl get pods

5.2 Find the leader by IP address

microk8s.kubectl get pod [NAME] -o custom-columns=":status.podIP" | xargs

In step 2, we saw which IP address belongs to the leader. Run the command above for each pod name from step 5.1 until you find the pod whose IP address matches the leader's.
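
Alternatively, assuming the same label selector as in step 1, a single command can list pod names together with their IP addresses, so the leader can be matched at a glance:

microk8s.kubectl get pods -l app.kubernetes.io/name=lighty-rnc-app-helm -o custom-columns="NAME:.metadata.name,IP:.status.podIP"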

5.3 Delete the pod.

microk8s.kubectl delete -n default pod [LEADER_NAME]

5.4 Verify that the pod is terminating

microk8s.kubectl get pods

You should see the old leader with status “Terminating” and a new pod running, which will replace it.


6. Verify that a new leader was elected.

curl  --request GET 'http://[HOST_IP]:8558/cluster/members/'
{
    "leader": "akka://opendaylight-cluster-data@10.1.101.178:2552",
    "members": [
        {
...

As a result of deleting the pod, a new leader was elected from the remaining pods.

7. Verify that the new pod also contains the device. The response should be the same as in step 4.

curl --request GET 'http://[NEW_MEMBER_IP]:8888/restconf/data/network-topology:network-topology/topology=topology-netconf/node=new-node'

8. Delete the device

curl --request DELETE 'http://[HOST_IP]:8888/restconf/data/network-topology:network-topology/topology=topology-netconf/node=new-node'

9. Verify that the testing device was removed from all cluster members

curl --request GET 'http://[MEMBER_1_IP]:8888/restconf/data/network-topology:network-topology/topology=topology-netconf'
curl --request GET 'http://[MEMBER_2_IP]:8888/restconf/data/network-topology:network-topology/topology=topology-netconf'
...

Your response should look like this:

{
    "network-topology:topology": [
        {
            "topology-id": "topology-netconf"
        }
    ]
}

10. Logs from the device can be shown by executing this command:

sudo docker logs [CONTAINER ID]

11. Logs from the lighty.io RNC app can be shown by executing the following command:

microk8s.kubectl logs [POD_NAME]

This post explained how to start a containerized lighty.io RNC application and configure it to use clustering. We also showed how to create your own cluster configuration, either from a values.yaml file or with the --set flag, and how to set the cluster size to your liking. Finally, rescaling the cluster and connecting a simulated device to it were shown and explained.


by Tobiáš Pobočík & Peter Šuňa | Leave us your feedback on this post!

You can contact us here.

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.


[What Is] Multus CNI

May 20, 2022/in Blog, CDNF.io /by PANTHEON.tech

Multus CNI (Container Network Interface) is a novel approach to managing multiple CNIs in your container network (Kubernetes). True to its name, which means “multiple” in Latin, Multus is an open-source plugin that serves as an additional layer in a container network to enable multi-interface support. For example, Virtual Network Functions (VNFs) often depend on connectivity towards multiple network interfaces.

The CNI project itself, backed by the Cloud Native Computing Foundation, defines a minimum specification on what a common interface should look like. The CNI project consists of three primary components:

  • Specification: an API that lies between the network plugins and runtimes
  • Plugins: depending on use-cases, they help set up the network
  • Library: CNI specifications as Go implementations, which are then utilized by runtimes

Each CNI can deliver different results, which makes Multus a wonderful plugin to manage these functionalities and make them work together.

Multus delivers this functionality in the form of a contact point between the container runtime and a selection of plugins, which are called upon to do the actual network configuration tasks.

Multus Characteristics

  • Manages the contact between the container runtime and plugins
  • Performs no network configuration by itself (depends on other plugins)
  • Uses Flannel to group plugins into delegates
  • Supports reference & 3rd-party plugins
  • Supports SRIOV, DPDK, OVS-DPDK & VPP workloads with cloud-native & NFV-based applications

Multus Plugin Support & Management

Currently, Multus supports all plugins maintained in the official CNI repository, as well as 3rd party plugins like Contiv, Cilium or Calico.

Management of plugins is done by handling them as delegates (using Flannel), which can be invoked in a certain sequence based on either a JSON schema or a CNI configuration. Flannel is an overlay network for Kubernetes, which configures a layer 3 network fabric and therefore satisfies the Kubernetes networking requirements (it runs by default with many plugins). Multus then assigns the eth0 interface in the pod to the primary/master plugin, while the rest of the plugins receive netX interfaces (net0, net1, etc.).
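
As an illustration, a secondary network is typically described with a NetworkAttachmentDefinition and attached to a pod via an annotation; the extra interface then shows up inside the pod as net0, next to the primary eth0. The names, master interface, and IP range below are hypothetical examples, not part of the demo described in the next section:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth1",
    "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
  }'
---
apiVersion: v1
kind: Pod
metadata:
  name: multus-demo
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-net
spec:
  containers:
    - name: app
      image: alpine
      command: ["sleep", "3600"]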

StoneWork in K8s Pod with Multiple Interfaces

Our team created a demo on how to run StoneWork in a Microk8s pod with multiple interfaces attached via the Multus add-on.

This example attaches two existing host interfaces to the StoneWork container running in a MicroK8s pod. Highlights include the option to add multiple DPDK interfaces, as well as multiple af_packet interfaces, to StoneWork with this configuration.

If you are interested in more details regarding this implementation, contact us for more information!

Utilizing Cloud-Native Network Functions

If you are interested in high-quality CNFs for your next or existing project, make sure to check out our portfolio of cloud-native network functions, by PANTHEON.tech.


Leave us your feedback on this post!

You can contact us here.

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.

 


[OpenDaylight] Migrating Bierman RESTCONF to RFC 8040

May 3, 2022/in Blog, OpenDaylight /by PANTHEON.tech

The RESTCONF protocol implementation draft-bierman-netconf-restconf-02, named after A. Bierman, is HTTP-based and enables manipulating YANG-defined data sets through a programmatic interface. It relies on the same datastore concepts as NETCONF, with modifications to enable HTTP-based CRUD operations.

Learn how to migrate from the legacy draft-bierman-netconf-restconf-02 to RFC8040 in OpenDaylight.

NETCONF vs. RESTCONF

While NETCONF uses SSH for network device management, RESTCONF supports secure HTTP access (HTTPS). RESTCONF also allows for easy automation through a RESTful API, where the syntax of the datastore is defined in YANG.

YANG is a data modeling language used for model configuration – such as state data, or administrative actions. PANTHEON.tech offers an open-source tool for verifying YANG data in OpenDaylight or lighty.io, as well as an IntelliJ plugin called YANGinator.

What is YANG?

The YANG data modeling language is widely viewed as an essential tool for modeling configuration and state data manipulated over NETCONF, RESTCONF, or gNMI.

RESTCONF/NETCONF Architecture

NETCONF defines configuration datastores and a set of CRUD operations (create, retrieve, update, delete). RESTCONF does the same but adheres to REST API & HTTPS compatibility.

The importance of RESTCONF therefore lies in its programmability and flexibility in network configuration automation use-cases.

By design, the architecture of this communication looks the same – a network device, composed of a datastore (modeled in YANG) and a server (RESTCONF or NETCONF), communicates with the target client through a protocol (RESTCONF or NETCONF):


RESTCONF/NETCONF client communication flow.

RESTful API

REST is a generally established set of rules for building stateless, dependable web APIs. RESTful is an informal term for a web API that follows the REST requirements.

RESTful APIs are primarily built on the HTTP protocol, accessing resources via URL-encoded parameters and transmitting data as JSON or XML.

OpenDaylight was one of the early adopters of the RESTCONF protocol. For increased compatibility, two RESTCONF implementations are supported today – the legacy draft-bierman-netconf-restconf-02 & RFC8040.
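
To illustrate the practical difference, the same read of the NETCONF topology configuration is addressed differently by the two implementations; the URLs below are illustrative OpenDaylight defaults and may differ depending on your controller configuration:

# Legacy Bierman draft – the datastore is part of the path (/config or /operational):
curl --request GET 'http://[CONTROLLER_IP]:8181/restconf/config/network-topology:network-topology'

# RFC8040 – a single data resource, with the datastore selected by a query parameter:
curl --request GET 'http://[CONTROLLER_IP]:8181/rests/data/network-topology:network-topology?content=config'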

What’s New in RFC8040?

The biggest difference in the RFC8040 implementation of RESTCONF, compared to the legacy Bierman implementation, is the transition to YANG 1.1 support.

YANG 1.1 introduces a new type of RPC operation, called actions, which can be attached to selected nodes in the data schema. The YANG Library describes the set of YANG modules supported by the server, together with their revisions, features, and deviations.

Other new features include new XPath functions, an option to define asynchronous notifications (with schema nodes), and more. For a more detailed insight, we recommend reading this comprehensive list of changes.

Migration from Legacy RESTCONF Implementation

Since the RFC8040 RESTCONF implementation is now in General Availability and ready to replace the legacy Bierman draft, PANTHEON.tech has decided to stop supporting the draft implementation and help customers with migration.

Contact PANTHEON.tech for support in migrating RESTCONF implementations from draft-bierman-netconf-restconf-02 to RFC8040.


[Opinion] Why Open-Source Matters

April 22, 2022/in Blog /by PANTHEON.tech

PANTHEON.tech is an avid open-source supporter. As the largest contributor to the OpenDaylight source-code, as well as being active in many more open-source projects and creating our own, we believe that open software is a way of enriching collaboration and making better products overall.

Open-source is also a philosophy of freedom, meaningfulness, and the idea that wisdom should be shared.

The practical consequences of software being open are far-reaching and of great significance – much bigger than an uninvolved observer would guess at first glance. To see the consequences, you need to take a look at a big enough project. Or even better, at an entire infrastructure consisting of open-source projects.

Collaboration in Open-Source

A big advantage open-source brings is collaboration. This might sound obvious, but it has several forms:

  • The collaboration between companies, even competitors, contributing their own piece to the common open code results in products many times better than standalone work could produce.
  • The collaboration of academics – the best results have always been achieved with academic precision.

Research & Academics in Open-Source

Researchers prefer to spend their valuable time working on open code and open research. Some run their state-of-the-art static or formal analysis tools on open-source projects as part of their research and then send patches.

Some design well-defined APIs, such as UNIX POSIX. Others solve known problems using open-source platforms as a base – for example, NixOS, which solves so many problems with packaging.

But why do academics prefer open code? Not only because open code is more accessible. Some of them prefer it for philosophical reasons – since code is similar to a mathematical equation or other scientific findings, it should be published publicly.

Code should be perfect, consistent, and bug-free. That is why collaboration with people having this mindset is invaluable.

Community & Collaboration

Community collaboration: there is no small number of excellent engineers contributing to open-source in their free time – for fun, for their own needs, out of belief in the open-source philosophy, or to help other people and find meaning in their work.

Informal collaboration – mailing lists, IRC channels, Stack Overflow, blogs, and forums. Everyone who has developed a closed-source project knows what I am talking about: how much easier it is to find the information you need about an open-source project, whether it is a guide, documentation, an explanation of an error message, or a bug report.

This makes work on open-source projects much more effective and simple.

Verification in Open-Source

The verification side of collaboration is crucial. The number of eyeballs reviewing your code, again and again, is a huge advantage compared to the closed-source way of a one-time review by one or two people. For the same reason, Linus Torvalds said:

I made it publicly available but I had no intention to use the open-source methodology, I just wanted to have comments on the work.

Different people have different ideas and points of view. The involvement of lots of computer science experts makes the results far more promising and objective.

The overall time diverse people spend working on open-source far exceeds the cost any corporation would be willing to pay for a closed-source equivalent.

But there is much more about open source than just better collaboration. Open-source is also a philosophy of freedom, meaningfulness, and the idea that wisdom should be shared.

People would rather create something that makes sense and might help other people, as we are social beings – and use something without being bound in any way.

Imagine a Closed World

Now, just for contradiction, let’s imagine everything was closed – even standards and protocols in one big monopoly.

You would be forced to use only their technology and to buy services or updates to be able to work as usual. And what’s worse – what if the company stopped delivering or supporting something you really need? For example, the format of your important documents.

Kind of scary. Let’s rather imagine an open world, where absolutely everything is done in an open-source manner – even cars, electrical appliances, everything. I believe you now see the importance of the open-source philosophy as well.

Open source is the future. It’s modern, and it’s the right way to go from many points of view. Even big players who initially didn’t like the idea are now involved, at least partially.

But how can we adapt to these changes? How is it possible to do business with open-source?

Stay tuned for Part 2, where we might touch on this topic.


by Július Milan | Leave us your feedback on this post!

You can contact us here.

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.


[StoneWork] IS-IS Feature

February 3, 2022/in Blog /by PANTHEON.tech

PANTHEON.tech continues to develop its Cloud-Native Network Functions portfolio with the recent inclusion of Intermediate System-to-Intermediate System (IS-IS) routing support, based on FRRouting. This inclusion complements and augments our current StoneWork Enterprise routing offering, providing customers with a choice alongside the usual networking vendors’ solutions.

Leveraging FRRouting, an industry-leading, open-source Linux Foundation project, PANTHEON.tech provides a comprehensive suite of routing options. These include OSPF, BGP, and now IS-IS, which can fully integrate and interoperate with existing or new networking requirements.

As a cloud-native network function, our solution is designed to maximize container-based technologies and micro-services architecture.

We provide the IS-IS feature to our customers in the following options:

  • IS-IS CNF – standalone CNF appliance with IS-IS support
  • StoneWork Enterprise – security, switching, and routing features, now with IS-IS support

StoneWork Enterprise & IS-IS Integration

The control plane is based on a Ligato agent, which configures every aspect of FRR. The two protocols, OSPF & IS-IS, run as separate daemons. Route information is stored in Zebra (the FRR IP routing manager), which translates these routes to the Linux kernel, into the default Linux routing table.

The data plane forwards this information via a TAP tunnel towards a VPP instance (supporting IS-IS & OSPF), which, together with another Ligato agent in the StoneWork container, enables OSPF & IS-IS functionality in an FRRouting-based, cloud-native network function.

With the power of containerization and enterprise-grade routing protocols, StoneWork Enterprise enables network service providers to easily get on board with cloud-native acceleration and enjoy all of its benefits.

What is FRRouting?

FRRouting (FRR) is a completely open-source internet routing protocol suite, with support for BGP, OSPF, OpenFabric, and more. FRR provides IP routing services, routing & policy decisions, and general exchange of routing information with other routers. Its incredible speed is achieved by installing routing decisions directly into the OS kernel. FRR supports a wide range of L3 configurations and dynamic routing protocols, making it a flexible and lightweight choice for a variety of deployments.

The magic of FRR lies within its natural ability to integrate with the Linux/Unix IP networking stack. This in turn allows for the development of networking use-cases – be it LAN switching & routing, internet access routers or peering, and even connecting hosts/VMs/containers to a network. It is hosted as a Linux Foundation collaborative project.

What is the IS-IS protocol?

The Intermediate System to Intermediate System is one of the most commonly deployed routing protocols across large network service providers and enterprises.

Specifically, it is an interior gateway protocol (IGP), used for exchanging routing information within an autonomous system. IS-IS operates over L2, does not require IP connectivity, and provides more security. It is also more flexible and scalable than the OSPF protocol.
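
For orientation only, a minimal FRR-style IS-IS configuration for a single interface might look like the sketch below; the NET address and interface name are placeholders, and StoneWork Enterprise drives this configuration through its declarative control plane rather than a hand-edited frr.conf:

cat >> /etc/frr/frr.conf <<'EOF'
interface eth0
 ip router isis CORE
!
router isis CORE
 net 49.0001.0000.0000.0001.00
 is-type level-2-only
!
EOF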

While different use-cases might require a substitution for IS-IS, our StoneWork Enterprise solution has recently enabled IS-IS integration as part of FRRouting, including a possible IS-IS CNF (Cloud Native Network Function) – standalone appliance.

Buy StoneWork Enterprise today!

If you are interested in StoneWork Enterprise, make sure to contact us today for a free, introductory consultation!

 

StoneWork is a high-performance, all-(CNFs)-in-one network solution.

 

Thanks to its modular architecture, StoneWork dynamically integrates all CNFs from our portfolio. A configuration-dependent startup of modules provides feature-rich control plane capabilities by preserving a single, high-performance data plane.

This way, StoneWork achieves the best-possible resource utilization, unified declarative configuration, and re-use of data paths for packet punting between cloud-native network functions. Due to utilizing the FD.io VPP data plane and container orchestration, StoneWork shows excellent performance, both in the cloud and bare-metal deployments.


by PANTHEON.tech | Leave us your feedback on this post!

You can contact us here!

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.


[lighty.io RNC] Create & Use Containerized RESTCONF-NETCONF App

January 27, 2022/in Blog /by PANTHEON.tech

The lighty.io RNC (RESTCONF-NETCONF) application allows you to easily initialize, start, and utilize the most used OpenDaylight services, and optionally add custom business logic.

lighty.io RNC has been recently used in the first-ever production deployment of ONAP, by Orange.

This pre-packaged container image served as a RESTCONF-NETCONF bridge for communication between the ONAP component CDS and Cisco® NSO.

Inside the app, we provide a pre-prepared Helm chart that can be easily used for Kubernetes deployment. This tutorial explains the step-by-step deployment of the lighty.io RNC application with Helm 2/3 and a custom, local Kubernetes engine.


Read the complete tutorial:



You will learn how to deploy the lighty.io RNC app via Helm 2 or Helm 3. While developers might still prefer to use Helm 2, we have prepared scenarios for deployment with both versions of this Kubernetes package manager.

It is up to you, which Kubernetes engine you will pick for this deployment. We will be using and going through the installation of the microk8s Local Kubernetes engine.

  • Helm 2: requires Kubernetes (k8s) version 1.21 or lower
  • Helm 3: requires Kubernetes (k8s) version 1.22 or higher (see the version check below)
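
Assuming MicroK8s, as used throughout this tutorial, a quick way to check which case applies to your cluster is to query the server version:

microk8s.kubectl version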

Likewise, we will show you how to simulate NETCONF devices through another lighty.io tool. The tutorial will finish by showing you simple CRUD operations for testing the lighty.io RNC App.

Why lighty.io?

PANTHEON.tech’s enhanced OpenDaylight based software offers significant benefits in performance, agility, and availability.

As the ongoing lead contributor to Linux Foundation’s OpenDaylight, PANTHEON.tech develops lighty.io, with the ultimate goal of enhancing the network controller experience for a variety of use-cases.

lighty.io offers a modular and scalable architecture, allowing dev-ops greater flexibility to deploy only those modules and libraries that are required for each use-case. This means that lighty.io is not only more lightweight at run time, but also more agile at development and deployment time.

And since only the minimum required set of components and libraries is present in a deployment at runtime, stability is enhanced as well.

Due to its modular capability, lighty.io provides a stable & scalable network controller experience, ideal for production deployments, enabling the burgeoning multitude of 5G/Edge use cases.

Learn more by visiting the official homepage or contacting us!


by Peter Šuňa | Leave us your feedback on this post!

You can contact us here!

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.


2021 | A Look Back

December 17, 2021/in Blog /by PANTHEON.tech

Join us in reminiscing about what PANTHEON.tech has managed to create and participate in throughout 2021.

Most significantly, PANTHEON.tech has celebrated 20 years of its existence.

We have participated in two conferences this year, validated our position within the OpenDaylight community, and expanded our product portfolio!

lighty.io: Releases & Tools

The lighty.io team at PANTHEON.tech has gone through several releases in 2021, as well as a bunch of tools to accompany it along the way!

lighty.io RESTCONF-NETCONF Controller

  • Out-of-the-box
  • Pre-packaged
  • Microservice-ready application

Use it to easily manage network elements in your SDN use case.

You can read more about the project on our dedicated GitHub readme, or in the in-depth article below:

lighty.io YANG Validator

Create, validate, and visualize the YANG data model of your application, without the need to call any external tool – just by using the lighty.io framework.

YANGinator | Validate YANG files in IntelliJ

Built for one of the most popular IDEs, YANGinator is a plugin for validating YANG files without leaving your IDE’s window!

Visit the project on GitHub, or on the official marketplace.

Conferences & Events

PANTHEON.tech has sponsored and attended two conferences this year. Broadband World Forum in Amsterdam, and Open Networking & Edge Summit + Kubernetes on Edge Day, which was ultimately held in a virtual environment.

Our network controller SDK, lighty.io, was mentioned during the OCP Global Summit, as part of a SONiC Automation use-case.

lighty.io was also part of the first-ever, production deployment of ONAP by Orange.

News, Blogs & Use-Cases

We encourage you to read through over 28 original posts on our webpage. The topics range from explanations of recent network trends to interesting use-cases for enterprises.

Lanner Inc. has established a partnership with PANTHEON.tech to successfully validate StoneWork’s all-in-one Cloud-Native Functions solution on Lanner’s NCA-1515 uCPE Platforms.

PANTHEON.tech has been recognized as a member of the Intel Network Builders program Winners Circle. 

Pantheon Fellow and one of the elite experts on OpenDaylight, Robert Varga, was invited for a podcast episode with the Broadband Bunch!

CDNF.io | A cloud-native portfolio

Our cloud-native network functions portfolio, CDNF.io, includes a combined data/control plane – StoneWork – and 18 CNFs.

Learn more about this expanding project by visiting its homepage, CDNF.io, or contact us directly to explore your options.

SandWork | Automate & Orchestrate Your Network

The ultimate network orchestration tool for your enterprise network. This Smart Automated Net/Ops & Dev/Ops platform orchestrates your networks with a user-friendly UI. Manage hundreds to thousands of individual nodes with open-source tech and enterprise support & integration.

Contact us to learn more about SandWork!


[lighty.io] Orange Deploys ONAP in Production

November 22, 2021/in Blog /by PANTHEON.tech

On the 11th of November, Orange hosted a webinar with The Linux Foundation, showcasing their production deployment of ONAP to automate their IP/MPLS transport network infrastructure.

Learn how to containerize and deploy your own lighty.io RNC instance in this tutorial!

[lighty.io RNC] Create & Use Containerized RESTCONF-NETCONF App

PANTHEON.tech is extremely proud to see its OpenDaylight-based lighty.io project included in this – Orange’s first – successful production ONAP deployment. lighty.io is an open-source project developed and supported by PANTHEON.tech.

 


Open-Source Matters

Orange’s vision and target are to move from a closed IT architecture to an open platform architecture based on the Open Digital Framework, a blueprint produced by members of TMForum. ONAP provides ODP compliant functional blocks to cover the delivery of many use-cases like service provisioning of 5G transport, IP, fixed access, microwave, optical networks services, as well as CPE management, OS upgrades, and many others.

Why lighty.io?

The SDN-C is the network controller component of ONAP – managing, assigning, and provisioning network resources. It is based on OpenDaylight, with additional Directed Graph Execution capabilities. However, lighty.io, PANTHEON.tech’s enhanced OpenDaylight based software, offers significant benefits in performance, agility, and availability, taking SDN-C to another level.

OpenDaylight vs. lighty.io

As the ongoing lead contributor to Linux Foundation’s OpenDaylight, PANTHEON.tech develops lighty.io, with the ultimate goal of enhancing the network controller experience for a variety of use-cases.

With increased functionality to upgrade dependencies and packaged use-case-based applications, lighty.io brings Software-Defined Networking to a Cloud-Native environment.

lighty.io offers a modular and scalable architecture, allowing dev-ops greater flexibility to deploy only those modules and libraries that are required for each use-case. This means that lighty.io is not only more lightweight at run time, but also more agile at development and deployment time.

And since only the minimum required set of components and libraries is present in a deployment at runtime, stability is enhanced as well.

Due to its modular capability, lighty.io provides a stable & scalable network controller experience, ideal for production deployments, enabling the burgeoning multitude of 5G/Edge use cases.

A reduced codebase footprint and packaging makes lighty.io more:

  • Lightweight
  • Agile
  • Stable
  • Secure
  • Scalable

PANTHEON.tech also provides other complementary tools, plugins, and additional features, like:

  • Java Protocol libraries
  • Network Simulators
  • YANG Validators
  • Pre-packaged, container use-case driven application, with Helm charts
    • lighty.io RESTCONF-NETCONF Application
    • lighty.io RESTCONF-gNMI Application

 Orange Egypt: ONAP Production Deployment

Orange Egypt uses ONAP as a northbound, higher-level orchestration, for L3VPN service provisioning.

The user leverages a GUI to enter network service parameters. The request is then submitted to the ONAP Service Orchestrator (SO), powered by Camunda’s BPMN workflow engine, through a REST API, and a custom-designed workflow is executed.

Afterward, an instantiated ONAP SO workflow triggers the ONAP Controller Design Studio (CDS) using a REST API to run custom blueprint packages to start the provisioning of L3VPN network services.

In this production deployment, lighty.io is used for communication between CDS and Cisco NSO. The application used in the ONAP production deployment is lighty-RNC – a pre-packaged container image, providing a RESTCONF-NETCONF bridge.

CDS assigns the required resources (IP + VLAN) from NetBox and the lighty.io secret from Vault. After the successful allocation of the needed resources by CDS, lighty.io then sends the information to Cisco NSO to configure the devices via a NETCONF session.


Source: YouTube, LFN Webinar: Orange Deploys ONAP In Production

Future of ONAP

Orange demonstrated ONAP in an IP/MPLS backbone automation use-case, but that is just the beginning. ONAP is already starting to look into automation and SDN control of microwave and optical networks, paving the way for other operators to deploy ONAP in their use cases.

You can watch the entire webinar here:


by PANTHEON.tech | Leave us your feedback on this post!

You can contact us here!

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.


We Are 20 Years Old

November 2, 2021/in Blog /by PANTHEON.tech

In 2001, a friend asked me if I would like to start a company with him. Word got out, so in the end, we were four – a bunch of acquaintances who didn’t exactly know what they were doing. Among all our other activities, we did not have time for the company, and this was fully reflected in the first financial statements.

In 2004, after graduating from university, I turned my attention back to the company. I had the ambition to run it, and over time we agreed to transfer all shares to me and my sister. It was then that Pantheon became a FAMILY COMPANY.

In addition to common system integrations, we started by selling used hardware – our goal was retail. At that time, however, prices and margins began to fall sharply, so we switched to wholesale. We exported to the Middle East, Asia, and Africa. We gradually added ATMs, which left Slovakia for Hong Kong and Turkey.

I enjoyed this job. Pantheon had about 5 employees at the time. However, margins continued to decline, and there was an opportunity to focus on software development. I did not say no to the offer – there was significantly more money in this area. At the time, I didn’t even think about whether I was going to “build a big business” or where I wanted to be in 20 years. It is very fashionable today to talk this way, but in reality, I would be lying.

In 2009, we signed our first contract in Silicon Valley. We were a good group, who enjoyed their jobs. In 2011 we opened our first office in the US and in 2013, we started to expand to Banská Bystrica & Žilina.

Thanks to the growth, we were able to gain new experience and transform it into our own product portfolio in the field of network technologies, such as lighty.io, CDNF.io, SandWork, and EntGuard. We also managed to create a comprehensive HR system, Chronos, which facilitates the management of all personnel processes, including records and attendance management of all employees.

We had more people at the peak than we could manage. It was time to cut back a bit and make quality a top priority. Even today, we are still working on clearly communicating to employees what work environment we support, through our expectations and their benefits.

This year, we celebrate 20 years since our inception. During that time, we repaid our debts and took the opportunity to grow. We learned that growth is not directly proportional to quality, and we prefer to stay smaller but more valuable – for us and our clients. We have found that we will always look for and prefer people who are not afraid of challenges and self-reflection.

I am proud of our journey so far, and I look forward to everything that comes – mainly from the product point of view.

Tomáš Jančo, CEO


[Release] lighty.io 15 | The Ultimate OpenDaylight Companion

October 20, 2021/in Blog /by PANTHEON.tech

The 15th release of lighty.io is here, bringing a bunch of new features and even more improvements for you to create your SDN controller.

In parallel with our work on OpenDaylight – PANTHEON.tech being the largest contributor to the OpenDaylight Phosphorus release – our team was working hard on releasing the newest version of lighty.io.

Of course, lighty.io adopted the latest Phosphorus upstream. So let’s have a look at what else is new in lighty.io 15!

[Feature] lighty.io gNMI Module – Simulator

The latest addition to the lighty.io modules is the gNMI module for device simulation. The simulator emulates gNMI devices driven by gNMI proto files, with a datastore defined by a set of YANG models. gNMI is used for configuration manipulation and state retrieval of gNMI devices.

The gNMI Simulator supports SONiC gNOI, to the extent of the following gNOI gRPCs:

  • file.proto:
    • get – downloads dummy file
    • stat – returns stats of file on path
  • system.proto:
    • time – returns current time
  • and these RPCs

Furthermore, we introduced the gNMI Force Capability, for overwriting used capabilities of gNMI devices in the gNMI SouthBound plugin.

[Use-Case] lighty.io gNMI/RESTCONF & Simulator

Our team also prepared a guide for quick-starting a pre-prepared gNMI/RESTCONF application with the gNMI device simulator.

Hand-in-hand, the lighty.io RESTCONF gNMI App now provides Docker & Helm support, for deployment via Kubernetes.

The example shows a gNMI south-bound interface, utilized with a RESTCONF north-bound interface to manage gNMI devices on the network.

This example works as a standalone SDN controller and is capable of connecting to gNMI devices and exposing connected devices over RESTCONF north-bound APIs.

[Improvements] Deprecations & Fixes

  • lighty.io RNC received a Postman collection for users to edit and bend for their own use.
  • We removed the OpenFlow plugin (in-line with future plans for OpenFlow in OpenDaylight), as well as the NETCONF-Quarkus App for lighty.io.
  • lighty-codecs is now fully replaced by lighty-codecs-utils.
  • A major cleanup of modules and their references was done as well.
  • Improvements were made to GitHub Workflows and to SonarCloud-reported issues, for code stability and hardening.

Give lighty.io 15 a try and let us know what you think!


by the lighty.io Team| Leave us your feedback on this post!

You can contact us here!

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.


BPMN & ONAP in Spine-Leaf DC’s

September 2, 2021/in Blog, CDNF.io /by PANTHEON.tech

Enterprises require workflows to understand internal processes, how they apply to different branches, and how to divide responsibility to achieve a common goal. Using a workflow makes it possible to pick & choose which models are required.

Although there are many alternatives, BPMN is a standard widely used across several fields to graphically depict business processes and manage them.

Notable, although underrated, are its benefits for network administrators. BPMN enables network device management & automation, without having to fully comprehend the different programming languages involved in each task.

What is BPMN?

The Business Process Model & Notation (BPMN) standard graphically represents specifics of business processes in a business process model. In cooperation with the Camunda platform, which provides its own BPMN engine, it can do wonders with network orchestration automation.

BPMN lets enterprises graphically depict internal business procedures and enables companies to render these procedures in a standardized manner. Using BPMN removes the need for software developers to adjust business logic since the entire workflow can be managed through a UI.

In the case of network management, it provides a level of independence, abstracted from the network devices themselves.

This logic behind how business processes are standardized as workflows is present in the Open Network Automation Platform (ONAP) as well.

What is ONAP?

ONAP is a robust, real-time, policy-driven, open-source orchestration and automation framework for physical and virtual network functions.

ONAP allows network scaling and VNF/CNF implementations in a fully automated manner. Read our in-depth post on what ONAP is and how you can benefit from its usage. BPMN is implemented within ONAP via Camunda.

Camunda is an open-source platform, used in the ONAP Service Orchestrator – where it serves as one of the core components of the project to handle BPMN 2.0 process flows.

Relationship between ONAP & BPMN

The Service Orchestrator (SO) component includes a BPMN Execution Engine. Two Camunda products are utilized within ONAP SO:

  • Cockpit: View BPMN 2.0 workflow definitions
  • Modeler: Edit BPMN 2.0 process flows

The SO component is mostly composed of Java & Groovy code, including a Camunda BPMN code-flow.

PANTHEON.tech circumvents the need for SO and uses the Camunda BPMN engine directly. This resulted in a project with SO functionality, without the additional SO components – sort of a microONAP concept.

Features: Camunda & BPMN

The business process modeling is a single action of network orchestration. As with any project integration, it is important to emphasize the project’s strong points, which enabled us to achieve a successful use case.

Benefits of Camunda/BPMN

  • Automation: BPMN provides a library of reusable boxes, which make their use more accessible by avoiding/hiding unnecessary complexity
  • Performant BPMN Engine: the engine provides good out-of-the-box performance, with a variety of operator/DevOps UI tools, as well as BPMN modeling tools
  • User Interface: OOTB user interface, with the option of creating a custom user interface
  • DevOps: easy manipulation & development of processes
  • Scalability: in terms of performance tuning and architecture development for lots of tasks
  • Interoperability: extensible components, REST integration, or script hooks for Groovy, JavaScript & more
  • REST API: available for BPMN engine actions
  • Exceptional Error Handling
  • Scalability: tasks with high execution cadence can be externalized and be implemented as scalable microservices. That provides not only scalability to the system itself but can be applied to the teams and organizations as well
  • Process tracking: the execution of the process is persisted and tracked, which helps with system recovery and continuation of the process execution in partial and complete failure scenarios.

One thing PANTHEON.tech had to mitigate was, for example, parallelism – running several processes at once. Timing estimation limits high-precision configuration of network devices: imagine you want to automate a process starting with Task 1, and after a certain time, Task 2 takes effect. Timers in BPMN, however, need manual configuration to tune the interval between jobs & processes.

Our deep dive into this topic resulted in a concept for automating network configurations in spine-leaf data centers, using a lightweight ONAP SO architecture alternative.


Use Case: Virtual Network Configuration in Spine-Leaf Data Centers

PANTHEON.tech has achieved a design for this use-case’s custom architecture that is fully functional and meets the required criteria – fully adopting network automation in a demanding environment.

Our use-case shows how BPMN can be used as a network configuration tool in, for example, data centers. In other words – how ONAP’s SO and lighty.io could be used to automate your data center.

If you are interested in this use case, make sure to contact us and we can brief you on the details.


by Filip Gschwandtner | Leave us your feedback on this post!

You can contact us here!

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.


[lighty.io] Open-Source gNMI RESTCONF Application

June 29, 2021/in Blog /by PANTHEON.tech

The lighty.io gNMI RESTCONF app allows for easy manipulation of gNMI devices. PANTHEON.tech has open-sourced the gNMI RESTCONF app for lighty.io, to increase the capabilities of lighty.io for different implementations and solutions.

lighty.io gNMI RESTCONF App

Imagine CRUD operations on multiple gNMI devices, managed by one application – lighty.io. All requests towards the gNMI devices are executed via RESTCONF operations, and the responses are formatted as JSON.

The most important lighty.io components used in the lighty.io gNMI RESTCONF application are:

  • lighty.io Controller – provides core OpenDaylight services (MD-SAL, YANG Tools, Global Schema Context & more) that are required for other services or plugins
  • lighty.io RESTCONF Northbound – provides the RESTCONF interface, used for communication with the application, via the RESTCONF protocol over HTTP
  • lighty.io gNMI Southbound – acts as the gNMI client. Manages connections to gNMI devices and gNMI communication. Currently supported gNMI capabilities are Get & Set

Prerequisites

To build and start the lighty.io gNMI RESTCONF application locally, you need:

  •  Java 11 or later
  •  Maven 3.5.4 or later

Custom Configuration

Before the lighty.io gNMI RESTCONF app creates a mount point for communicating with a gNMI device, it is necessary to create a schema context. This schema context is created based on the YANG files that the device implements. These models are obtained via the gNMI Capability response, but only model names and versions are actually returned. Thus, we need some way of providing the content of the YANG models.

There are two ways of providing the YANG file content, so that lighty.io gNMI RESTCONF can correctly create the schema context:

  • add a parameter to the RCGNMI app .json configuration
  • use the upload-yang-model RPC

Both of these options load the YANG files into the datastore, from which lighty.io gNMI RESTCONF reads the model, based on its name and version obtained in the gNMI Capability response.

YANG Model Configuration as a Parameter

  1. Open the custom configuration example in src/main/resources/example_config.json.
  2. Add a custom gNMI configuration at the root level, next to the controller or RESTCONF configuration:

"gnmi": {
  "initialYangsPaths" : [
    "INITIAL_FOLDER_PATH"
  ]
}

  3. Change INITIAL_FOLDER_PATH in the JSON block above to the path of a folder containing the YANG models you wish to load into the datastore. These models will then be automatically loaded on startup.

YANG Model Configuration via the upload-yang-model RPC

Alternatively, add a YANG model to the running app with an RPC request. YANG_MODEL should be the model body, with an escape character before each double quotation mark:

curl --request POST 'http://127.0.0.1:8888/restconf/operations/gnmi-yang-storage:upload-yang-model' \
--header 'Content-Type: application/json' \
--data-raw '{
    "input": {
        "name": "openconfig-interfaces",
        "semver": "2.4.3",
        "body": "YANG_MODEL"
    }
}'

Start the gNMI RESTCONF Example App

1. Build the project using:

mvn clean install

2. Go to the target directory:

cd lighty-rcgnmi-app/target

3. Unzip example application bundle:

unzip  lighty-rcgnmi-app-14.0.1-SNAPSHOT-bin.zip

4. Go to the unzipped application directory:

cd lighty-rcgnmi-app-14.0.1-SNAPSHOT

5. To start the application with a custom lighty.io configuration, use the -c argument. For a custom initial log4j configuration, use the -l argument:

start-controller.sh -c /path/to/config-file -l /path/to/log4j-config-file

Using the gNMI RESTCONF Example App

Register Certificates

Certificates used for connecting to a device can be stored inside the lighty-gnmi datastore. The certificate key and passphrase are encrypted before they are stored.

After registering the certificate key and passphrase, it is not possible to get decrypted data back from the data store.

curl --request POST 'http://127.0.0.1:8888/restconf/operations/gnmi-certificate-storage:add-keystore-certificate' \
--header 'Content-Type: application/json' \
--data-raw '{
    "input": {
        "keystore-id": "keystore-id-1",
        "ca-certificate": "-----BEGIN CERTIFICATE-----
                              CA-CERTIFICATE
                          -----END CERTIFICATE-----",
        "client-key": "-----BEGIN RSA PRIVATE KEY-----
                                CLIENT-KEY
                      -----END RSA PRIVATE KEY-----",
        "passphrase": "key-passphrase",
        "client-cert": "-----BEGIN CERTIFICATE-----
                              CLIENT_CERT
                        -----END CERTIFICATE-----"
    }
}'

Remove Certificates

curl --location --request POST 'http://127.0.0.1:8888/restconf/operations/gnmi-certificate-storage:remove-keystore-certificate' \
--header 'Content-Type: application/json' \
--data-raw '{
    "input": {
        "keystore-id": "keystore-id-1"
    }
}'

Update Certificates

To update the already existing certificates, use the request for registering a new certificate with the keystore-id you wish to update.

Connecting a gNMI Device

To establish a connection and communication with the gNMI device via RESTCONF, one needs to add a new node to gnmi-topology. This is done by sending the appropriate requests (examples below) with a unique node-id.

The connection parameters specify how to reach the device and how the client (lighty.io gNMI RESTCONF) authenticates itself.

The connection-type property is an enum and can be set to two values:

  • INSECURE: use TLS, but skip the validation of certificates
  • PLAINTEXT: disable TLS and communicate in plain text

When the device requires the client to authenticate with registered certificates, remove the connection-type property. Then, add the keystore-id property with the ID of the registered certificates.

If the device requires username/password validation, fill in the username and password in the credentials container. This container is optional.

In case the device requires additional parameters in the gNMI request/response, there is a container called extensions-parameters, where a defined set of parameters can be optionally included in the gNMI request and response. Those parameters are:

  • overwrite-data-type is used to overwrite the type field of the gNMI GetRequest.
  • use-model-name-prefix is used when the device requires a module prefix in the first element name of the gNMI request path.
  • path-target is used to specify the context of a particular stream of data and is only set in the prefix of a path.

curl --request PUT 'http://127.0.0.1:8888/restconf/data/network-topology:network-topology/topology=gnmi-topology/node=node-id-1' \
--header 'Content-Type: application/json' \
--data-raw '{
    "node": [
        {
            "node-id": "node-id-1",
            "connection-parameters": {
                "host": "127.0.0.1",
                "port": 9090,
                "connection-type": "INSECURE",
                "credentials": {
                    "username": "admin",
                    "password": "admin"
                }
            }
        }
    ]
}'

Create a Mountpoint with Custom Certificates

curl --request PUT 'http://127.0.0.1:8888/restconf/data/network-topology:network-topology/topology=gnmi-topology/node=node-id-1' \
--header 'Content-Type: application/json' \
--data-raw '{
    "node": [
        {
            "node-id": "node-id-1",
            "connection-parameters": {
                "host": "127.0.0.1",
                "port": 9090,
                "keystore-id": "keystore-id-1",
                "credentials": {
                    "username": "admin",
                    "password": "admin"
                }
            }
        }
    ]
}'

Get State of Registered gNMI Device

curl --request GET 'http://127.0.0.1:8888/restconf/data/network-topology:network-topology/topology=gnmi-topology/node=node-id-1'

[Example] RESTCONF gNMI GetRequest

curl --location --request GET 'http://127.0.0.1:8888/restconf/data/network-topology:network-topology/topology=gnmi-topology/node=node-id-1/yang-ext:mount/openconfig-interfaces:interfaces'

[Example] RESTCONF gNMI SetRequest

curl --request PUT 'http://127.0.0.1:8888/restconf/data/network-topology:network-topology/topology=gnmi-topology/node=node-id-1/yang-ext:mount/interfaces/interface=br0/ethernet/config' \
--header 'Content-Type: application/json' \
--data-raw '{
    "openconfig-if-ethernet:config": {
        "enable-flow-control": false,
        "openconfig-if-aggregate:aggregate-id": "admin",
        "auto-negotiate": true,
        "port-speed": "openconfig-if-ethernet:SPEED_10MB"
    }
}'

Disconnect gNMI Device

curl --request DELETE 'http://127.0.0.1:8888/restconf/data/network-topology:network-topology/topology=gnmi-topology/node=node-id-1'

[RESTCONF] gNMI Operations Mapping

The supported HTTP methods are listed below:

| YANG Node Type | HTTP Method |
| --- | --- |
| Configuration Data | POST, PUT, PATCH, DELETE, GET |
| Non-Configuration Data | GET |
| YANG RPC | POST |

For each REST request, the lighty.io gNMI RESTCONF app invokes the appropriate gNMI operation (GnmiGet or GnmiSet) to process the request.

Below is the mapping of HTTP operations to gNMI operations:

| HTTP Method | gNMI Operation | Request Data | Response Data |
| --- | --- | --- | --- |
| GET | GnmiGet | path | status, payload |
| POST | GnmiSet | path, payload | status |
| PATCH | GnmiSet | path, payload | status |
| PUT | GnmiSet | path, payload | status |
| DELETE | GnmiSet | path, payload | status |

[RESTCONF] GET Method Mapping

In both cases we are reading data, but from different datastores:

  • Reading data from the operational datastore invokes readOperationalData() in GnmiGet:
src/main/java/io/lighty/gnmi/southbound/mountpoint/ops/GnmiGet.java
  • Reading data from the configuration datastore invokes readConfigurationData() in GnmiGet:
src/main/java/io/lighty/gnmi/southbound/mountpoint/ops/GnmiGet.java

[RESTCONF] PUT/POST/PATCH/DELETE Method Mapping

  • Sending data to operational/configuration datastore invokes method set() in GnmiSet:
src/main/java/io/lighty/gnmi/southbound/mountpoint/ops/GnmiSet.java

The input parameters come from the request in the form of the SetRequest message fields: update, replace, and delete.

  • A PUT/POST request sends update messages through two fields: update and replace
  • A PATCH request sends update messages through the update field
  • A DELETE request sends update messages through the delete field

Further Support & Questions

PANTHEON.tech open-sourced lighty.io a while ago, giving the community a unique chance to discover the power of lighty.io in their SDN solution.

If you require enterprise support, integration, or training, make sure to contact us so we can help you catch up with the future of networking.


by Martin Bugáň, Ivan Čaládi, Peter Šuňa & Marek Zaťko


[lighty.io] BGP EVPN Route-Reflector (2021)

June 22, 2021/in Blog, SDN /by PANTHEON.tech

In our previous blog post, we introduced the Border Gateway Protocol Route-Reflector (BGP-RR) function in an SDN controller based on lighty.io. In this article, we're going to extend the BGP function of the SDN controller with an EVPN extension in the BGP control plane.

Functionality

This article will discuss BGP-EVPN functions in an SDN controller and how the lighty.io BGP function can replace existing legacy route-reflectors running in the service provider’s WAN/DC networks. BGP-EVPN provides:

  • Advanced Layer 2 MAC and Layer 3 IP reachability information capabilities in control-plane
  • Route-Type 2: advertising MAC/IP address, instead of traditional MAC learning mechanisms
  • Route-Type 5: advertising the IP prefix subnet prefix route

We're going to show you a BGP-EVPN IP subnet routing use-case.

A BGP-EVPN control plane can also co-exist with various data-planes, such as MPLS, VXLAN, and PBB.

Use-case: Telecom Data-Center

In this blog, we’re going to show you the BGP-EVPN control plane working together with the VXLAN data plane. The perfect use case for this combination would be a telecom data center.

Virtual Extensible LAN (VXLAN) is an overlay technology for network virtualization. It provides Layer 2 extension over a shared Layer 3 underlay infrastructure network, by using the MAC address in an IP/User Datagram Protocol (MAC in IP/UDP) tunneling encapsulation. The initial IETF VXLAN standards defined a multicast-based flood-and-learn VXLAN without a control plane.

It relies on data-based flood-and-learn behavior for remote VXLAN tunnel endpoint (VTEP) peer-discovery and remote end-host learning. BGP-EVPN, as the control plane for VXLAN, overcomes the limitations of the flood-and-learn mechanism.

Test Bed

Test Bed Visualization

In this demo, we will use:

  • five Docker containers & three Docker networks.
  • Docker auto-generated user-defined bridge networks with mask /16
  • Arista’s cEOS software, as we did in our previous demo

Remember that an Arista cEOS switch creates an EtX port when starting up in the container, which is bridged to the EthX port in Docker.

These auto-generated EtX ports are accessible and configurable from the cEOS CLI and are in the default L2 switching mode on start, which means they don't have an IP address assigned.

Now, let's expand our previous demo topology with a few more network elements. Here is a list of the Docker containers used in this demo:

  • leaf1 & leaf2: WAN switch & access/node
  • host1 & host2: Ubuntu VM
  • BGP-RR: BGP-EVPN Route Reflector

Here is a list of Docker user-defined networks used in this demo:

  • net1 (172.18.0.0/16): connects leaf1 & host1
  • net2 (172.19.0.0/16): connects leaf2 & host2
  • net3 (172.20.0.0/16): connects leaf1, leaf2 & bgp-rr

Our Goal: Routing

By the end of this blog, we want to have IP connectivity between the virtual machines host1 and host2. For that, we need BGP to advertise the loopback networks and the VLAN information between the nodes.

In this example, we are using a single autonomous system, AS 50.

To demonstrate the route-reflector EVPN functionality, leaf1 & leaf2 do not form an iBGP pair with each other; instead, each of them peers with lighty-BGP, which acts as the route reflector. In the VXLAN configuration, we do not set up a static flood VTEP list – the route reflector should redistribute this information to its peers.

The container with lighty-BGP MUST NOT be used as a forwarding node since it doesn’t know the routing table.

Configuration

This demo configuration is prepared and tested on Ubuntu 18.04.2.

Docker Configuration

Before you start, please make sure that you have Docker (download instructions, use version 18.09.6 or higher) & Postman downloaded and installed.

1. Download the lighty-BGP Docker image from the PANTHEON.tech repository on Docker Hub

https://hub.docker.com/u/pantheontech
sudo docker pull pantheontech/lighty-rr:9.2.0-dev

2. Download the Docker image for Arista cEOS (v. 4.26.1F)

sudo docker import cEOS-lab.tar.xz ceosimage:4.26.1F

3. Download the Ubuntu image from DockerHub

sudo docker pull ubuntu:latest

4. Check the Docker images, successfully installed in the repository

sudo docker images

Preparing the Docker Environment

1. Create Docker networks

sudo docker network create net1
sudo docker network create net2
sudo docker network create net3

2. Check all Docker networks, that have been created

sudo docker network ls

3. Create containers in Docker

sudo docker create --name=bgp-rr --privileged -e INTFTYPE=eth -it pantheontech/lighty-rr:9.2.0-dev
sudo docker create --name=leaf1 --privileged -e INTFTYPE=eth -e ETBA=1 -e SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1 -e CEOS=1 -e EOS_PLATFORM=ceoslab -e container=docker -i -t ceosimage:4.26.1F /sbin/init systemd.setenv=INTFTYPE=eth systemd.setenv=ETBA=1 systemd.setenv=SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1 systemd.setenv=CEOS=1 systemd.setenv=EOS_PLATFORM=ceoslab systemd.setenv=container=docker
sudo docker create --name=leaf2 --privileged -e INTFTYPE=eth -e ETBA=1 -e SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1 -e CEOS=2 -e EOS_PLATFORM=ceoslab -e container=docker -i -t ceosimage:4.26.1F /sbin/init systemd.setenv=INTFTYPE=eth systemd.setenv=ETBA=1 systemd.setenv=SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1 systemd.setenv=CEOS=2 systemd.setenv=EOS_PLATFORM=ceoslab systemd.setenv=container=docker 
sudo docker create --privileged --name host1 -i -t ubuntu:latest /bin/bash
sudo docker create --privileged --name host2 -i -t ubuntu:latest /bin/bash

4. Connect containers to Docker networks

sudo docker network connect net1 leaf1
sudo docker network connect net1 host1
sudo docker network connect net2 leaf2
sudo docker network connect net2 host2
sudo docker network connect net3 bgp-rr
sudo docker network connect net3 leaf1
sudo docker network connect net3 leaf2

5. Start all containers

sudo docker start leaf1
sudo docker start leaf2
sudo docker start host1
sudo docker start host2
sudo docker start bgp-rr

6. Enable permanent IPv4 forwarding in cEOS containers

sudo docker exec -it leaf1 /sbin/sysctl net.ipv4.conf.all.forwarding=1
sudo docker exec -it leaf2 /sbin/sysctl net.ipv4.conf.all.forwarding=1

7. Check, whether all Docker containers have started successfully

sudo docker container ls

Optional: Use this, if you’re looking for detailed information about running Docker containers (X is replaced by device/host number)

sudo docker container inspect [leaf[X] | bgp-rr | host[X]]
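
For example, to list only the IP addresses Docker assigned to a container in each of its networks (useful, for instance, to verify which address bgp-rr received in net3 before configuring the switches; leaf1 is used here as an example), a Go-template format string can be used:

sudo docker inspect -f '{{range $net, $conf := .NetworkSettings.Networks}}{{$net}}: {{$conf.IPAddress}}{{"\n"}}{{end}}' leaf1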

Preparing Ubuntu Environment

1. Get into the machine (X is replaced by device/host number)

sudo docker exec -it host[X] bash

2. Update the machine

apt-get update

3. Install the required packages

apt-get install iproute2
apt-get install iputils-ping

4. Exit the Docker Container (CTRL+D). Repeat steps 2 & 3.

Arista cEOS Switch configuration

Now, we will configure Arista cEOS switches. We will split the configuration of Arista cEOS Switches into several steps.

Click here for full configurations of Arista switches ‘leaf1‘ & ‘leaf2‘.

Ethernet interfaces & connectivity check

1. Go into the Arista switch leaf1

sudo docker exec -it leaf1 Cli

2. Set Privilege, and go to configure-mode

enable
configure terminal

3. Setup the switch’s name

hostname leaf1

4. Set up the Ethernet interface. If you run more devices, yours could be connected to a different Ethernet interface.

interface ethernet 2
no switchport
ip address 172.20.0.2/16

5. Check if BGP-RR is reachable from the configured interface.

  • If you can't ping 'BGP-RR', check whether 'leaf1' and 'BGP-RR' are located in the same Docker network, or revert the previous step and try another Ethernet interface.
ping 172.20.0.4 source ethernet2

6. Repeat the steps above for 'leaf2' – go into the Arista switch leaf2 and configure it

sudo docker exec -it leaf2 Cli
enable
config t
hostname leaf2
interface ethernet 2 
no switchport
ip address 172.20.0.3/16
ping 172.20.0.4 source ethernet2

Configuring the Border Gateway Protocol

We will have identical configurations for ‘leaf1′ & ‘leaf2′. Exceptions will be highlighted in the instructions below.

1. Enable BGP in Arista switch

  • If you are still in the previous settings interface, go to the root of the Arista configuration by repeating the “exit” command.
service routing protocols model multi-agent
ip routing

2. Set up BGP

  • For ‘leaf2’, use the Router-ID ‘router-id 172.20.0.3‘
router bgp 50
router-id 172.20.0.2
neighbor 172.20.0.4 remote-as 50
neighbor 172.20.0.4 next-hop-self
neighbor 172.20.0.4 send-community extended
redistribute connected
redistribute attached-host

3. Setup EVPN in BGP

address-family evpn
neighbor 172.20.0.4 activate

Configuring VxLAN Interface & VLAN

We will have identical configurations for leaf1 & leaf2. Exceptions will be highlighted in the instructions below.

1. Enable VLAN with ID 10.

  • Make sure that this command is typed in the root of Arista and not in BGP
  • If you are still in the BGP configuration, use the command ‘exit’
vlan 10

2. Configure loopback 0, which will be used as a VTEP (VxLAN tunnel endpoint) for VxLAN.

  • In ‘leaf2’, use IP ‘10.10.10.2/32’, instead of IP ‘10.10.10.1/32’
interface loopback 0
ip address 10.10.10.1/32

3. Configure VxLAN Interface

  • Here we’ll set up loopback 0 as a VTEP and configure VNI (VXLAN Network Identifier) to 3322.
interface Vxlan1
vxlan source-interface Loopback0
vxlan udp-port 4789
vxlan vlan 10 vni 3322
vxlan learn-restrict any

4. Assign Ethernet interface to VLAN

interface Ethernet 1
switchport mode access
switchport access vlan 10

5. Share loopback 0 to BGP-RR

  • In ‘leaf2‘, use IP ‘10.10.10.2/32’ instead of ‘10.10.10.1/32’
router bgp 50
address-family ipv4
network 10.10.10.1/32

6. Configure VLAN in BGP

  • Here we share the information about VLAN to BGP-RR
router bgp 50
vlan 10
rd 50:3322
route-target both 10:3322
redistribute learned

7. Save your configuration with the ‘wr‘ command in both Arista devices and restart them with the command:

sudo docker restart leaf1 leaf2

lighty.io & BGP Route Reflector

In this part, we will add the Border Gateway Protocol configuration to the lighty.io BGP route reflector.

There is a lot to configure, so the crucial parts are commented to break it down a little.

If we want to see the logs from lighty.io, we can attach to the started container:

sudo docker attach bgp-rr

Alternatively, we can start the BGP-RR container with the command:

sudo docker start bgp-rr --attach

to see the logs from the beginning. Afterward, send a PUT request with the BGP configuration to BGP-RR. We should then see the corresponding messages in the logs.

More RESTCONF commands can be found here.

Verify device state

Now, we will check if all configurations were set up successfully. We will also check if VxLAN is created and the Virtual PCs can ‘ping’ each other.

1. Check if EVPN BGP peering is established

leaf1(config)#sh bgp evpn summary
BGP summary information for VRF default
Router identifier 172.20.0.2, local AS number 50
Neighbor Status Codes: m - Under maintenance
  Neighbor         V  AS           MsgRcvd   MsgSent  InQ OutQ  Up/Down State  PfxRcd PfxAcc
  172.20.0.4       4  50                 3         6    0    0 00:00:09 Estab  0      0
leaf2(config)#sh bgp evpn summary
BGP summary information for VRF default
Router identifier 172.20.0.3, local AS number 50
Neighbor Status Codes: m - Under maintenance
  Neighbor         V  AS           MsgRcvd   MsgSent  InQ OutQ  Up/Down State  PfxRcd PfxAcc
  172.20.0.4       4  50               267       315    0    0 00:01:16 Estab  1      1

If your devices are in the 'Connected' or 'Active' state, you have probably checked right after sending the request to lighty.io – it usually takes at most one minute to establish the connection.

If you still see this state after that, there could be something wrong with the BGP configuration. Please check your configuration in the Arista CLI by typing the command 'show running-config' and compare it with the full Arista configuration above.

If the Arista configuration is correct, the problem could be in the BGP-RR container. This can be fixed by restarting the BGP-RR container.

2. Check the IP routing table for the loopbacks advertised by the other device

leaf1(config)#sh ip route
VRF: default
Codes: C - connected, S - static, K - kernel,
       O - OSPF, IA - OSPF inter area, E1 - OSPF external type 1,
       E2 - OSPF external type 2, N1 - OSPF NSSA external type 1,
       N2 - OSPF NSSA external type2, B I - iBGP, B E - eBGP,
       R - RIP, I L1 - IS-IS level 1, I L2 - IS-IS level 2,
       O3 - OSPFv3, A B - BGP Aggregate, A O - OSPF Summary,
       NG - Nexthop Group Static Route, V - VXLAN Control Service,
       DH - DHCP client installed default route, M - Martian,
       DP - Dynamic Policy Route
 
Gateway of last resort is not set
 
 C      10.10.10.1/32 is directly connected, Loopback0
 B I    10.10.10.2/32 [200/0] via 172.20.0.3, Ethernet2
 C      172.20.0.0/16 is directly connected, Ethernet2
leaf2(config)#sh ip route
VRF: default
Codes: C - connected, S - static, K - kernel,
       O - OSPF, IA - OSPF inter area, E1 - OSPF external type 1,
       E2 - OSPF external type 2, N1 - OSPF NSSA external type 1,
       N2 - OSPF NSSA external type2, B I - iBGP, B E - eBGP,
       R - RIP, I L1 - IS-IS level 1, I L2 - IS-IS level 2,
       O3 - OSPFv3, A B - BGP Aggregate, A O - OSPF Summary,
       NG - Nexthop Group Static Route, V - VXLAN Control Service,
       DH - DHCP client installed default route, M - Martian,
       DP - Dynamic Policy Route
 
Gateway of last resort is not set
 
 B I    10.10.10.1/32 [200/0] via 172.20.0.2, Ethernet2
 C      10.10.10.2/32 is directly connected, Loopback0
 C      172.20.0.0/16 is directly connected, Ethernet2

3. Check the VxLAN interface – whether it was created and contains the remote VTEP

leaf1#sh interfaces vxlan 1
Vxlan1 is up, line protocol is up (connected)
  Hardware is Vxlan
  Source interface is Loopback0 and is active with 10.10.10.1
  Replication/Flood Mode is headend with Flood List Source: EVPN
 Remote MAC learning via EVPN
  VNI mapping to VLANs
  Static VLAN to VNI mapping is
    [10, 3322]      
  Note: All Dynamic VLANs used by VCS are internal VLANs.
        Use 'show vxlan vni' for details.
  Static VRF to VNI mapping is not configured
  Headend replication flood vtep list is:
    10 10.10.10.2    
  VTEP address mask is None
leaf2(config)#sh interfaces vxlan 1
Vxlan1 is up, line protocol is up (connected)
  Hardware is Vxlan
  Source interface is Loopback0 and is active with 10.10.10.2
  Replication/Flood Mode is headend with Flood List Source: EVPN
 Remote MAC learning via EVPN
  VNI mapping to VLANs
  Static VLAN to VNI mapping is
    [10, 3322]      
  Note: All Dynamic VLANs used by VCS are internal VLANs.
        Use 'show vxlan vni' for details.
  Static VRF to VNI mapping is not configured
  Headend replication flood vtep list is:
    10 10.10.10.1    
  VTEP address mask is None

If you don't see an IP address in the 'Headend replication flood vtep list' section, then the BGP-RR container did not start correctly. This problem can be fixed by removing the BGP-RR container and creating it again.

Restarting BGP-RR container

1. Stop the container

sudo docker stop bgp-rr

2. Remove BGP-RR container

sudo docker rm bgp-rr

3. Create a new container

sudo docker create --name=bgp-rr --privileged -e INTFTYPE=eth -it pantheontech/lighty-rr:9.2.0-dev

4. Connect BGP-RR to docker network

sudo docker network connect net3 bgp-rr

5. Start the container again

sudo docker start bgp-rr

Optional: If you want to see the logs from lighty.io, attach to the container:

sudo docker attach bgp-rr

Testing IP Connectivity

If everything worked out, we can test the IP connectivity between the virtual PCs.

1. Open Virtual PC host1

sudo docker exec -it host1 bash

2. Setup IP address for this device

ip addr add 31.1.1.1/24 dev eth1

3. Perform the same configuration on host2

sudo docker exec -it host2 bash
ip addr add 31.1.1.2/24 dev eth1

4. Try to ping host1 from host2

ping 31.1.1.1
root@e344ec43c089:/# ip route
default via 172.17.0.1 dev eth0
31.1.1.0/24 dev eth1 proto kernel scope link src 31.1.1.2
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.5
172.19.0.0/16 dev eth1 proto kernel scope link src 172.19.0.3
 
root@e344ec43c089:/# hostname -I
172.17.0.5 172.19.0.3 31.1.1.2
 
root@e344ec43c089:/# ping 31.1.1.1
PING 31.1.1.1 (31.1.1.1) 56(84) bytes of data.
64 bytes from 31.1.1.1: icmp_seq=1 ttl=64 time=114 ms
64 bytes from 31.1.1.1: icmp_seq=2 ttl=64 time=55.5 ms
64 bytes from 31.1.1.1: icmp_seq=3 ttl=64 time=53.0 ms
64 bytes from 31.1.1.1: icmp_seq=4 ttl=64 time=56.1 ms
^C
--- 31.1.1.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 53.082/69.892/114.757/25.929 ms

When we go back to the Arista switches, we can check the learned MAC address information.

leaf1#sh mac address-table
          Mac Address Table
------------------------------------------------------------------
 
Vlan    Mac Address       Type        Ports      Moves   Last Move
----    -----------       ----        -----      -----   ---------
  10    0242.211d.8954    DYNAMIC     Et1        1       0:00:54 ago
  10    0242.8b29.b7ea    DYNAMIC     Vx1        1       0:00:40 ago
  10    0242.ac12.0003    DYNAMIC     Et1        1       0:00:14 ago
  10    0242.ac13.0003    DYNAMIC     Vx1        1       0:00:13 ago
  10    ce9a.ca0c.88a1    DYNAMIC     Et1        1       0:00:54 ago
Total Mac Addresses for this criterion: 5
 
          Multicast Mac Address Table
------------------------------------------------------------------
 
Vlan    Mac Address       Type        Ports
----    -----------       ----        -----
Total Mac Addresses for this criterion: 0
leaf2#sh mac address-table
          Mac Address Table
------------------------------------------------------------------
 
Vlan    Mac Address       Type        Ports      Moves   Last Move
----    -----------       ----        -----      -----   ---------
  10    0242.211d.8954    DYNAMIC     Vx1        1       0:00:48 ago
  10    0242.8b29.b7ea    DYNAMIC     Et1        1       0:01:03 ago
  10    0242.ac12.0003    DYNAMIC     Vx1        1       0:00:22 ago
  10    0242.ac13.0003    DYNAMIC     Et1        1       0:00:22 ago
  10    ce9a.ca0c.88a1    DYNAMIC     Vx1        1       0:00:48 ago
Total Mac Addresses for this criterion: 5
 
          Multicast Mac Address Table
------------------------------------------------------------------
 
Vlan    Mac Address       Type        Ports
----    -----------       ----        -----
Total Mac Addresses for this criterion: 0

Conclusion

We have successfully shown the lighty.io BGP functionality, which can replace legacy Route-Reflectors. This setup can be applied to telecom data centers and other use-cases. It demonstrates lighty.io's versatility and usability. Contact us for more information!

Peter Šuňa & Peter Lučanský


You can contact us at https://pantheon.tech/

Explore our Pantheon GitHub.

Watch our YouTube Channel.


[What Is] Network Fabric: Automation & Monitoring

June 9, 2021/in Blog, OpenDaylight, SDN /by PANTHEON.tech

Network fabric describes a mesh network topology with virtual or physical network elements, forming a single fabric.

What is it?

This simple metaphor does not do justice to the industry term, which describes the performance and functionality of mostly L2 & L3 network topologies. Since the point is for nodes to be interconnected with equal connectivity to each other, the term network fabric (NF) leaves out trivial L1 networks.

Primary performance goals include:

  • Abundance – sufficient bandwidth should be present, so that each node achieves equal speed when communicating in the topology
  • Redundancy – a topology has enough devices, to guarantee availability and failure coverage
  • Latency – as low as it can get

For enterprises with a lot of different users and devices connected via a network, maintaining a network fabric is essential to keep up with policies, security, and diverse requirements for each part of a network.

A network controller, like OpenDaylight, or lighty.io, would help see the entire network as a single device – creating a fabric of sorts.

Types & Future

A network topology would traditionally consist of hardware devices – access points, routers, or ethernet switches. We recognize two modern variants:

  1. Ethernet NF – an Ethernet fabric which recognizes all components in a network, like resources, paths & nodes.
  2. IP Fabric – utilizes BGP as a routing protocol & EVPN as an overlay

The major enabler of modernizing networking is virtualization, resulting in virtual network fabric. 

Virtualization (based on the concept of NFVs – network function virtualization), replaces hardware in a network topology with virtual counterparts. This in turn enables:

  • Reduced security risks & errors
  • Improved network scaling
  • Remote maintenance & support

lighty.io: Network Fabric Management & Automation

Migrating to a fabric-based, automated network is easy with PANTHEON.tech.

lighty.io provides a versatile & user-friendly SDN controller experience, for your virtualized NF.

With ease-of-use in mind and powered by JavaSE, lighty.io is the ideal companion for your NF virtualization plans.

Try lighty.io for free!

Network controllers, such as lighty.io, help you create, configure & monitor the NF your business requires.

If OpenDaylight is your go-to platform for network automation, you can rely on PANTHEON.tech to provide the best possible support, training, or integration.

PANTHEON.tech: OpenDaylight Services

 


Ultimate OpenDaylight Performance Testing

May 18, 2021/in Blog, OpenDaylight /by PANTHEON.tech

by Martin Baláž | Subscribe to our newsletter!

PANTHEON.tech has contributed to another important milestone for the ODL community – OpenDaylight Performance Testing.

You might have seen our recent contribution to the ONAP CPS component, which was focused on performance testing as well. Our team worked tirelessly on enabling the OpenDaylight community to test the performance of its NETCONF implementation. More on that below.

NETCONF Performance Testing

To be able to manage hundreds or thousands of NETCONF enabled devices without any slowdown, performance plays a crucial role. The time needed to process requests regarding NETCONF devices causes additional latency in network workflow, therefore the controller needs to be able to process all incoming requests as fast as possible.

What is NETCONF?

The NETCONF protocol is a fairly simple mechanism through which network devices can be easily managed, and configuration data can be uploaded, edited, and retrieved.

NETCONF enables device exposure through a formal API (application programming interface). The API is then used by applications to send/receive configuration data sets either in full or partial segments.

The OpenDaylight controller supports the NETCONF protocol in two roles:

  • as a server (Northbound plugin)
  • as a client (Southbound plugin)

NETCONF & RESTCONF in OpenDaylight

The Northbound plugin is an alternative interface for MD-SAL. It gives users the capability to read and write data from the MD-SAL datastore and to invoke its RPCs.

The Southbound plugin's capability lies in connecting to remote NETCONF devices. It exposes their configuration or operational datastores, RPCs, or notifications as MD-SAL mount points.

Mount points then allow applications or remote users to interact with the mounted devices via RESTCONF.

Scalability Tests

Scalability testing is a technique of measuring system reactions in terms of performance, with gradually increased demands. It expresses how well the system can handle an increased amount of requests, and whether upgrading computer hardware improves the overall performance. From the perspective of data centers, it is a very important property.

It is frequent that the number of customers or the amount of requests increases over time, and the OpenDaylight controller needs to adapt to be able to cope with it.

Test Scenarios

There are four test scenarios. These scenarios involve both NETCONF plugins, northbound and southbound. Each of them is examined from the perspective of scalability. During all tests, the maximum OpenDaylight heap space was set to 8GB.

The setup we used was OpenDaylight Aluminium, with two custom changes (this and that). These are already merged in the newest Silicon release.

Southbound: Maximum Devices Test

The main goal of this test is to measure how many devices can be connected to the controller with a limited amount of heap memory. Simulated devices were initialized with the following set of YANG models:

  • ietf-netconf-monitoring 
  • ietf-netconf-monitoring-extension  (OpenDaylight extensions to ietf-netconf-monitoring)
  • ietf-yang-types
  • ietf-inet-types

Devices were connected by sending a large batch of configurations, with the ultimate goal of connecting as many devices as soon as possible, without waiting for the previous batch of devices to be fully connected.

The maximum number of NETCONF devices is set to 47 000. This is based on the fact that the ports used by the simulated NETCONF devices start at 17 830 and can go up to 65 535, the maximum port number on a single host; this range contains 65 535 - 17 830 = 47 705 possible ports.

| Heap Size | Connection Batch Size | TCP Max Devices | TCP Execution Time | SSH Max Devices | SSH Execution Time |
| --- | --- | --- | --- | --- | --- |
| 2GB | 1k | 47 000* | 14m 23s | 26 000 | 11m 5s |
| 2GB | 2k | 47 000* | 14m 21s | 26 000 | 11m 12s |
| 4GB | 1k | 47 000* | 13m 26s | 47 000* | 21m 22s |
| 4GB | 2k | 47 000* | 13m 17s | 47 000* | 21m 19s |

Table 1– Southbound scale test results

*- reached the maximum number of created simulated NETCONF devices, while running all devices on localhost


Northbound: Performance Test

This test tries to write l2fibs entries (ncmount-l2fib@2016-03-07.yang modeled) to the controller’s datastore, through the NETCONF Northbound plugin, as fast as possible.

Requests were sent two ways:

  • Synchronously: Each next request was sent, after receiving an answer for the previous request.
  • Asynchronously: Sending requests as fast as possible, without waiting for a response to any previous request. The time spent processing requests was calculated as the time interval between sending the first request and receiving the response to the last request (a simplified sketch of both modes follows below).
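
As an illustration of the two client modes (a minimal sketch only – not the test harness used for these measurements; RESTCONF_URL and l2fib-request.json are placeholders), a synchronous client issues the requests one after another, while an asynchronous client fires them all off and only waits at the end:

# Synchronous: wait for each response before sending the next request
for i in $(seq 1 1000); do
  curl -s -o /dev/null -X POST -H 'Content-Type: application/json' --data @l2fib-request.json "$RESTCONF_URL"
done

# Asynchronous: send everything without waiting, then wait for all transfers to finish
for i in $(seq 1 1000); do
  curl -s -o /dev/null -X POST -H 'Content-Type: application/json' --data @l2fib-request.json "$RESTCONF_URL" &
done
wait
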
| Clients | Client type | l2fib/req | Total l2fibs | TCP performance | SSH performance |
| --- | --- | --- | --- | --- | --- |
| 1 | Sync | 1 | 100 000 | 1 413 requests/s, 1 413 fibs/s | 887 requests/s, 887 fibs/s |
| 1 | Async | 1 | 100 000 | 3 422 requests/s, 3 422 fibs/s | 3 281 requests/s, 3 281 fibs/s |
| 1 | Sync | 100 | 500 000 | 300 requests/s, 30 028 fibs/s | 138 requests/s, 13 810 fibs/s |
| 1 | Async | 100 | 500 000 | 388 requests/s, 38 844 fibs/s | 378 requests/s, 37 896 fibs/s |
| 1 | Sync | 500 | 1 000 000 | 58 requests/s, 29 064 fibs/s | 20 requests/s, 10 019 fibs/s |
| 1 | Async | 500 | 1 000 000 | 83 requests/s, 41 645 fibs/s | 80 requests/s, 40 454 fibs/s |
| 1 | Sync | 1 000 | 1 000 000 | 33 requests/s, 33 230 fibs/s | 15 requests/s, 15 252 fibs/s |
| 1 | Async | 1 000 | 1 000 000 | 41 requests/s, 41 069 fibs/s | 39 requests/s, 39 826 fibs/s |
| 8 | Sync | 1 | 400 000 | 8 750 requests/s, 8 750 fibs/s | 4 830 requests/s, 4 830 fibs/s |
| 8 | Async | 1 | 400 000 | 13 234 requests/s, 13 234 fibs/s | 5 051 requests/s, 5 051 fibs/s |
| 16 | Sync | 1 | 400 000 | 9 868 requests/s, 9 868 fibs/s | 5 715 requests/s, 5 715 fibs/s |
| 16 | Async | 1 | 400 000 | 12 761 requests/s, 12 761 fibs/s | 4 984 requests/s, 4 984 fibs/s |
| 8 | Sync | 100 | 1 600 000 | 573 requests/s, 57 327 fibs/s | 366 requests/s, 36 636 fibs/s |
| 8 | Async | 100 | 1 600 000 | 572 requests/s, 57 234 fibs/s | 340 requests/s, 34 044 fibs/s |
| 16 | Sync | 100 | 1 600 000 | 545 requests/s, 54 533 fibs/s | 355 requests/s, 35 502 fibs/s |
| 16 | Async | 100 | 1 600 000 | 542 requests/s, 54 277 fibs/s | 328 requests/s, 32 860 fibs/s |

Table 2 – Northbound performance test results


Northbound: Scalability Tests

In terms of scalability, the NETCONF Northbound plugin was tested from two perspectives.

First, how well can OpenDaylight sustain performance (number of processed requests per second), while increasing the total amount of sent requests? Tests were executed in both variants, sending requests synchronously and also asynchronously.

In this scenario, it is desirable that the performance stays around a constant value across all test cases.


Diagram 1: NETCONF Northbound requests count scalability (synchronous)


Diagram 2: NETCONF Northbound requests count scalability (asynchronous)

In the second case, we examined how much time is needed to process all requests as the request size (the number of elements sent within one request) gradually increases.

It is desirable that the total time needed to process all requests grows at most in direct proportion to the request size.


Diagram 3: NETCONF Northbound request size scalability (synchronous)


Diagram 4: NETCONF Northbound request size scalability (asynchronous)


Southbound: Performance Test

The purpose of this test is to measure how many notifications containing prefixes can be received within one second.

All notifications were sent from a single simulated NETCONF device. No further processing of these notifications was done, except for counting the received notifications, which was needed to calculate the performance results.

The model of these notifications is example-notifications@2015-06-11.yang. The time needed to process notifications is calculated as the time interval between receiving the first notification and receiving the last notification.

All notifications are sent asynchronously, since there are no responses to NETCONF notifications.

| Prefixes/Notifications | Total Prefixes | TCP Performance | SSH Performance |
| --- | --- | --- | --- |
| 1 | 100 000 | 4 365 notifications/s, 4 365 prefixes/s | 4 432 notifications/s, 4 432 prefixes/s |
| 2 | 200 000 | 3 777 notifications/s, 7 554 prefixes/s | 3 622 notifications/s, 7 245 prefixes/s |
| 10 | 1 000 000 | 1 516 notifications/s, 15 167 prefixes/s | 1 486 notifications/s, 14 868 prefixes/s |

Table 3 – Southbound performance test results


Southbound: Scalability Tests

Scalability tests for the Southbound plugin were executed similarly to the tests of the Northbound plugin – running both scenarios. The results are calculated by examining how the performance changes with an increasing number of notifications, and how the total time needed to process all notifications changes while increasing the number of entries per notification.


Diagram 5: NETCONF Southbound notifications count scalability


Diagram 6: NETCONF Southbound notifications size scalability


OpenDaylight E2E Performance Test

In this test, the client tries to write vrf-routes (modeled by Cisco-IOS-XR-ip-static-cfg@2013-07-22.yang) to NETCONF enabled devices via the OpenDaylight controller.

It sends the vrf-routes via RESTCONF to the controller, using the specific RPC ncmount:write-routes (a sketch of such a request is shown below). The controller is responsible for storing this data in the simulated devices via NETCONF.
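
For illustration only – the URL, port, and payload file below are hypothetical placeholders, since the exact RPC input depends on the ncmount model and the RESTCONF flavor in use – such a write-routes request sent to the controller could look like this:

curl -s -X POST 'http://127.0.0.1:8181/restconf/operations/ncmount:write-routes' \
  -H 'Content-Type: application/json' \
  --data @write-routes-request.json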

Requests were sent two ways:

  • Synchronously: when each request was sent after receiving an answer for the previous request
  • Asynchronously: sending multiple requests as fast as possible, while maintaining a maximum of 1000 concurrent pending requests for which a response has not yet been received.
| Clients | Client type | Prefixes/request | Total prefixes | TCP performance | SSH performance |
| --- | --- | --- | --- | --- | --- |
| 1 | Sync | 1 | 20 000 | 181 requests/s, 181 routes/s | 99 requests/s, 99 routes/s |
| 1 | Async | 1 | 20 000 | 583 requests/s, 583 routes/s | 653 requests/s, 653 routes/s |
| 1 | Sync | 10 | 200 000 | 127 requests/s, 1 271 routes/s | 89 requests/s, 892 routes/s |
| 1 | Async | 10 | 200 000 | 354 requests/s, 3 546 routes/s | 344 requests/s, 3 444 routes/s |
| 1 | Sync | 50 | 1 000 000 | 64 requests/s, 3 222 routes/s | 44 requests/s, 2 209 routes/s |
| 1 | Async | 50 | 1 000 000 | 136 requests/s, 6 812 routes/s | 138 requests/s, 6 920 routes/s |
| 16 | Sync | 1 | 20 000 | 1 318 requests/s, 1 318 routes/s | 424 requests/s, 424 routes/s |
| 16 | Async | 1 | 20 000 | 1 415 requests/s, 1 415 routes/s | 1 131 requests/s, 1 131 routes/s |
| 16 | Sync | 10 | 200 000 | 1 056 requests/s, 10 564 routes/s | 631 requests/s, 6 313 routes/s |
| 16 | Async | 10 | 200 000 | 1 134 requests/s, 11 340 routes/s | 854 requests/s, 8 540 routes/s |
| 16 | Sync | 50 | 1 000 000 | 642 requests/s, 32 132 routes/s | 170 requests/s, 8 519 routes/s |
| 16 | Async | 50 | 1 000 000 | 639 requests/s, 31 953 routes/s | 510 requests/s, 25 523 routes/s |
| 32 | Sync | 1 | 320 000 | 2 197 requests/s, 2 197 routes/s | 921 requests/s, 921 routes/s |
| 32 | Async | 1 | 320 000 | 2 266 requests/s, 2 266 routes/s | 1 868 requests/s, 1 868 routes/s |
| 32 | Sync | 10 | 3 200 000 | 1 671 requests/s, 16 713 routes/s | 697 requests/s, 6 974 routes/s |
| 32 | Async | 10 | 3 200 000 | 1 769 requests/s, 17 696 routes/s | 1 384 requests/s, 13 840 routes/s |
| 32 | Sync | 50 | 16 000 000 | 797 requests/s, 39 854 routes/s | 356 requests/s, 17 839 routes/s |
| 32 | Async | 50 | 16 000 000 | 803 requests/s, 40 179 routes/s | 616 requests/s, 30 809 routes/s |
| 64 | Sync | 1 | 320 000 | 2 293 requests/s, 2 293 routes/s | 1 300 requests/s, 1 300 routes/s |
| 64 | Async | 1 | 320 000 | 2 280 requests/s, 2 280 routes/s | 1 825 requests/s, 1 825 routes/s |
| 64 | Sync | 10 | 3 200 000 | 1 698 requests/s, 16 985 routes/s | 1 063 requests/s, 10 639 routes/s |
| 64 | Async | 10 | 3 200 000 | 1 709 requests/s, 17 092 routes/s | 1 363 requests/s, 13 631 routes/s |
| 64 | Sync | 50 | 16 000 000 | 808 requests/s, 40 444 routes/s | 563 requests/s, 28 172 routes/s |
| 64 | Async | 50 | 16 000 000 | 809 requests/s, 40 456 routes/s | 616 requests/s, 30 847 routes/s |

Table 4 – E2E performance test results

E2E Scalability Tests 

These tests were executed just like the previous scale test cases – by increasing the number of requests and request size.

Requests count - scalability (synchronous)
Requests count - scalability (asynchronous)
Request size - scalability (synchronous)
Request size - scalability (asynchronous)

Conclusion

The test results show good scalability of OpenDaylight: it keeps an almost constant performance while processing larger requests, and it can handle a growing number of requests without the final performance decreasing too much.

The only exceptions were the cases where requests were sent synchronously over SSH – there is a sudden, significant increase in processing time when the request size exceeds 100. The maximum-devices test also shows good results, with the ability to connect more than 47 000 devices with 4GB of RAM and 26 000 devices with 2GB of RAM.

With the TCP protocol, those numbers are even higher. Compared to SSH, TCP turns out to be the faster option, but at the cost of the advantages that SSH brings, such as data encryption, which is critical for companies that need to keep their data safe.

Examining the performance differences between the SSH and TCP protocols in more detail is part of further investigation and of upcoming parts of our Performance Testing in OpenDaylight series, so stay tuned and subscribed!


Cloud-Native Firewall + ONAP (CDS) Integration

April 26, 2021/in Blog, CDNF.io /by PANTHEON.tech

PANTHEON.tech’s Firewall CNF can be integrated with the ONAP Controller Design Studio (CDS) component.

We achieved a successful & effective integration of the Firewall CNF with CDS, in an easy-to-understand use-case: blocking and allowing traffic between two Docker containers.

Cloud-Native Firewall & CDS

With ONAP, orchestration, management, and automation of network services is simple, yet effective. It allows defining policies and acting on network changes in real time.

With CDS, users can configure other ONAP components as well – such as SDN-C or SDN-R, and thereby directly configure the network itself.

CDS is responsible for designing and controlling self-services – a fully self-defined software system. It makes these self-services so accessible that minimal to no code development is required; they are usable even by non-programmers.

CDS in ONAP

Position of CDS within the ONAP architecture

Self-contained services are defined by a Controller Blueprint Archive (CBA). The core of the CBA structure defines the service, according to TOSCA – the topology and orchestration specification for cloud applications. These blueprints are modeled, enriched to become fully self-contained TOSCA blueprints, and uploaded to CDS.

ONAP Demo Simplification

Our VPP-Agent-based Firewall CNF can be configured using CDS and afterward effectively blocks or allows traffic between two Alpine Linux containers.

The workflow of applying a configuration to our Firewall CNF is comprised of two steps:

  1. Resolve the configuration template
  2. Apply the resolved configuration to the CNF, using the REST API (see the sketch below)
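
As a purely hypothetical illustration of step 2 – the endpoint, port, and file name below are placeholders, and the actual call is defined by the demo scripts in the repository – applying a resolved configuration to the CNF over REST could look like this:

curl -s -X PUT 'http://127.0.0.1:9191/configuration' \
  -H 'Content-Type: application/yaml' \
  --data-binary @resolved-firewall-config.yaml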

This shows the versatility and agility of our CNFs, by showcasing another possible integration in a popular project, such as ONAP.

Try our Firewall CNF + CDS Demo

This demonstration is available on our GitHub!

The script in our demonstration provides a setup, where necessary containers are started and the data plane and control plane are brought in place.

The script will then showcase traffic (pinging) from the start point to endpoint in three scenarios:

  1. Firewall CNF is not configured
  2. Firewall CNF is configured by CDS to deny traffic
  3. Firewall CNF is configured by CDS to allow traffic

PANTHEON.tech & ONAP

PANTHEON.tech is closely involved and following the development of various ONAP components.

The CPS component is of crucial importance in the ONAP project, since it serves as a common data layer service which preserves network-element runtime information, in the form of database functionality.

PANTHEON.tech's involvement in ONAP CPS includes creating an easy and common platform that makes testing deployments easier and highlights where optimization is needed or achieved.

We hope you enjoyed this demonstration!


Make sure to visit our cloud-network functions (CNF) portfolio!

by Filip Gschwandtner | Leave us your feedback on this post!

You can contact us here.

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.


PANTHEON.tech Introduces CPS Performance Testing to ONAP

April 15, 2021/in Blog /by PANTHEON.tech

As part of our commitment to improve & develop ONAP functionality, PANTHEON.tech has introduced Performance Testing to the ONAP Configuration Persistence Service (CPS) component.

The test flow included the following operations:

  • Create a new anchor with a unique name in the given dataspace
  • Create data node – full data tree upload for a given anchor
  • Update data node – node fragment replacement
  • Remove anchor (and associated data)

This Performance Testing will make testing deployments easier and show whether optimization is needed or achieved.

You can download the first-ever CPS Performance Testing report here:


What is CPS in ONAP?

The Configuration Persistence Service component serves as a common data layer service, which preserves network-element runtime information in the form of database functionality. This runtime data, or information, needs to be persistent, so CPS provides a data repository for it – this can include operational data.

CPS Performance Testing Environment


Businesses may rely on the ability to visualize and manage this data in their RAN network. So essentially, the goal of CPS is to improve the operation of data handling within ONAP – with better, efficient data layer services.

Use-cases for CPS are universal, since the project can be utilized in Edge or core ONAP deployments, where a database is deployed with each installation. Proposed use-cases also include Edge-2-Edge Network Slicing. Not to mention the OPEX you will be saving on.

Our Commitment to Open-Source

Yes, we are the largest contributor to OpenDaylight. But we also contribute code to FD.io VPP or ONAP, amongst others. We see open-source as “a philosophy of freedom, meaningfulness, and the idea that wisdom should be shared”, as we mentioned in another post. And we will continue to work with the wonderful communities of projects we have close at heart.


by Ruslan Kashapov | Leave us your feedback on this post!

You can contact us here.

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.


Manage Network Elements in SDN | lighty.io RNC

April 9, 2021/in Blog, OpenDaylight /by PANTHEON.tech

What if I told you, that there is an out-of-the-box pre-packaged microservice-ready application you can easily use for managing network elements in your SDN use case? And that it is open-sourced and you can try it for free? Yep, you heard it right.

The application consists of lighty.io modules packed together within various technologies – ready to be used right away.

Do you have a more complex deployment and are using Helm to deploy into Kubernetes? Or do you just need to use Docker images? Or do you want to handle everything by yourself, so the only thing you need is a runnable application? We've got you covered.

lighty.io RESTCONF-NETCONF Application

The most common use case we see at our customers is an SDN controller handling NETCONF devices via REST endpoints. This is due to the ease of integration with e.g. OSS, BSS, or ITSM systems, as these already have REST API interfaces and adapters.

This is where our first lighty.io application comes in – the lighty.io RNC application, where RNC stands for RESTCONF-NETCONF Controller.

Use Cases: Facilitate & Translate Network Device Communication

Imagine a scenario, where the ONAP Controller Design Studio (CDS) component needs to communicate with both RESTCONF & NETCONF devices.

lighty.io RESTCONF-NETCONF Controller enables and facilitates communication with both RESTCONF & NETCONF devices, while translating the communication both ways!

Its usability and features can save you time and resources in a variety of telco-related scenarios:

  • Data-Centers
  • OSS/BSS Integration (w/ NETCONF speaking devices & appliances)
  • Service Provider Networks (Access, Edge, etc.)
  • Central Office

Components

As the name suggests, it includes the RESTCONF northbound plugin at the top and the NETCONF southbound plugin at the bottom of the lighty.io controller.

At the heart of the application is the lighty.io controller. It provides core OpenDaylight services like MD-SAL, datastores, YANG Tools, handles global schema context, and more.

NETCONF southbound plugin serves as an adapter for NETCONF devices. It allows lighty.io to connect and communicate with them, execute RPCs, and read/write configuration.

RESTCONF northbound plugin is responsible for RESTCONF endpoints. These are used for communication between a user (or another application, like the aforementioned OSS/BSS systems, workflow managers, or ServiceNow for example) and the lighty.io application. RESTCONF gives us access to the so-called mount points serving as a proxy to devices.

These three components make up the core of the lighty.io RNC Application and form its base. But of course, there is no such thing as one solution to rule them all.

Oftentimes, there is a need for side-car functionalities to the RNC that are best built bespoke – fulfilling some custom business logic, or enhancing the RESTCONF API endpoints with side-loaded data.

We provide the means to customize and configure the lighty.io RNC application via configuration files to better fit your needs.

And if there is something we didn’t cover, do not hesitate to contact us or create a Pull Request or issue in our GitHub repository. We provide commercial custom development, developer, and operational support to enhance your efforts.

Configuration

You can find some common options in the JSON configuration file, like:

  • what address and port is RESTCONF listening to
  • what is the base URL of the RESTCONF endpoints
  • what is the name of the network topology where NETCONF is listening
  • which YANG models should be available in the lighty.io app itself
  • and more

But wait! There is more!

There are some special configuration options too, with a bit bigger impact.

One of them is the option to enable HTTPS for RESTCONF endpoints. When useHttps is set to true, HTTPS will be enabled. It is possible to specify a custom keystore too, and we recommend doing so – but for quick tests, the default keystore should be more than enough.

The option enableAAA is used to enable the lighty-aaa module. This module is responsible for authorization, authentication, and accounting, which for example makes it possible to use Basic Authentication for the RESTCONF northbound interface.

Generally, it's good practice to treat SDN controllers like this one as a stateless service, especially in a complex and dynamic deployment with a bigger number of services.

But if you want to initialize the configuration datastore with some data right after startup, it's possible with the “initialConfigData” part of the configuration. For example, you can insert connection information about a NETCONF device, so the lighty.io application will connect to it right after it starts.

Examples and a bit more explanation of these configuration options can be found in a lighty.io RESTCONF-NETCONF application README.md file.

Deployment

As mentioned in the beginning, we provide three main types of deployment: Helm chart for deployment in Kubernetes, Docker image, and a “zip” distribution containing all necessary jar files to run the application.

A step-by-step guide on how to build these artifacts from code can be found in a lighty.io RNC README.md file. It also contains steps on how to start and configure it.

Helm chart and Docker image can be also downloaded from public repositories.

Docker image can be downloaded from our GitHub Packages or via command:

docker pull ghcr.io/pantheontech/lighty-rnc:latest

Helm chart can be downloaded from our GitHub helm-charts repository and you can add it into your Helm environment via these commands:

helm repo add pantheon-helm-repo https://pantheontech.github.io/helm-charts/ 
helm repo update
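
From there, installation follows the usual Helm workflow (the chart name below is an assumption – use whatever the search command actually lists):

helm search repo pantheon-helm-repo
helm install lighty-rnc pantheon-helm-repo/lighty-rnc-app-helm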

Give lighty.io RNC a try

In case you need an SDN controller for NETCONF devices providing RESTCONF endpoints, give lighty.io RNC a try. The guides linked above should be pretty straightforward.

And if you need any help, got some cool ideas, or want to use our solutions, you can contact us here!


by Samuel Kontriš | Leave us your feedback on this post!

You can contact us here.

Explore our PANTHEON.tech GitHub.

Watch our YouTube Channel.


[Tutorial] StoneWork + GNS3

April 5, 2021/in Blog, CDNF.io /by PANTHEON.tech

PANTHEON.tech has made StoneWork, its data plane for managing cloud-native network functions, available on the GNS3 marketplace. This makes it easy for anybody to try out our all-in-one solution, which can combine multiple cloud-native network functions from our CNF portfolio, in a separate environment.

This tutorial will give you the basics of setting up StoneWork in an environment where you can safely test its interaction and positioning within your (simulated) network.

The goal of this tutorial is to have a basic setup, where we will:

  • Setup StoneWork interface IP address
  • Set the status of StoneWork to UP
  • Verify the connection by pinging the address

Read the complete post after subscribing:



[Release] Cloud-Native Network Function YAML Editor

March 18, 2021/in Blog, CDNF.io /by PANTHEON.tech

Verify & Edit CNF YAML Configurations

CDNF.io YAML Editor is an open-source YAML configuration editor & verification tool. It is part of the CNF portfolio – as an added bonus, you can verify your cloud-native network function configuration with our tool!


The editor is available on the official website!

Features

  • YAML & JSON Schema Validation
  • Generating YAML Examples
  • Importing & Exporting Configurations

YAML Configuration Validation

Import, or copy & paste, a YAML configuration via the three-dot menu in the Configuration tab. We have conveniently placed an Examples folder with a JSON Schema that serves as the basis for validation.

Errors will then be highlighted against the imported JSON schema.

How-To: Validate your YAML file

  1. Visit the CDNF.io YAML Editor website
  2. Import/paste a valid draft-04 JSON Schema, or use the existing example, via the folder icon, in the JSON Schema tab, on the right.
    {
      "type": "object",
      "properties": {
        "user": {
          "type": "object",
          "properties": {
            "id": {
              "$ref": "#/definitions/positiveInt"
            },
            "name": {
              "type": "string"
            },
            "birthday": {
              "type": "string",
              "chance": {
                "birthday": {
                  "string": true
                }
              }
            },
            "email": {
              "type": "string",
              "format": "email"
            }
          },
          "required": [
            "id",
            "name",
            "birthday",
            "email"
          ]
        }
      },
      "required": [
        "user"
      ],
      "definitions": {
        "positiveInt": {
          "type": "integer",
          "minimum": 0,
          "minimumExclusive": true
        }
      }
    }
  3. Have a look at the generated Example YAML code in the YAML Example tab.

Invalid YAML File

  • Import, or copy & paste this invalid YAML example into the Configuration window
user:
  id: -33524623
  name: "Jon Snow"
  birthday: "19/12/283"
  email: "jonsnow@gmail.com"

Valid YAML File

  • Import, or copy & paste this valid YAML example into the Configuration window
user:
  id: 33524623
  name: "John Snow"
  birthday: "19/12/283"
  email: "jonsnow@gmail.com"

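If you would like to reproduce the same check outside the editor, here is a minimal Python sketch using the jsonschema and PyYAML libraries. This is purely illustrative and says nothing about how the editor itself is implemented; the file names are placeholders:

import json
import yaml
from jsonschema import Draft4Validator

# Load the draft-04 schema and the YAML document shown above
# ("user-schema.json" and "user.yaml" are placeholder file names).
with open("user-schema.json") as schema_file:
    schema = json.load(schema_file)
with open("user.yaml") as yaml_file:
    document = yaml.safe_load(yaml_file)

# Print every validation error; no output means the document is valid.
for error in sorted(Draft4Validator(schema).iter_errors(document), key=str):
    print(f"{list(error.path)}: {error.message}")

For the invalid example, this reports the negative id value; for the valid example, it prints nothing.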
Limitations

The JSON Schema specification recommends keeping all definitions under the definitions key and using relative paths to point to them.

Our implementation of JSON Schema requires a definitions object if the $ref ID links to a definition and does not use a relative path – see the example below.

  • Supported: JSON Schema draft-04 (and included features, such as valid formats, etc.)
  • Not supported: Loading definitions from external URIs
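To make the $ref limitation concrete, compare the two schema fragments below, sketched here as Python dictionaries; the external URI is invented for the example:

# Supported: a relative $ref pointing into the local "definitions" object.
supported_schema = {
    "type": "object",
    "properties": {"id": {"$ref": "#/definitions/positiveInt"}},
    "definitions": {"positiveInt": {"type": "integer", "minimum": 0}},
}

# Not supported by the editor: loading a definition from an external URI.
unsupported_schema = {
    "type": "object",
    "properties": {
        "id": {"$ref": "https://example.com/common-schema.json#/definitions/positiveInt"}
    },
}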

Feedback for CNF Tools

Leave us your feedback here or create an Issue in the repository of the CDNF.io YAML Editor. Explore our portfolio of cloud-native network functions, developed by PANTHEON.tech.

Make sure to visit our playlist on YouTube!

